How does Facebook get the images so quickly when posting a URL (Share a Link)?

  • At work we need to implement a fast and scalable system that does what "Share a Link" does on Facebook: given a link, fetch its images, and do it very quickly. Can anyone here shed some light on the internals of how to achieve this?

  • Answer:

    Older thread, but I'm going to clear a few things up here.

    1. Facebook does not use Embedly. They have their own internal solution.
    2. Facebook uses the Open Graph protocol, not oEmbed, for scraping URLs.

    "Fast" is mostly due to network effects. Once a URL has been fetched once, Facebook can cache the result for subsequent fetches; otherwise they incur the penalty of making HTTP requests. When an image is fetched, they don't download and save it first: Facebook uses an image proxy to display it in the share widget, and the image is only saved once the post is made. If you want to create something on this magnitude it's tough, because you will probably never get the network effects that allow you to have 85% of links in cache. We (Embedly) are at about 70%, but that number is actually going down as a more diverse set of developers uses us. Embedly uses a mix of Python, Tornado, Membase and Cassandra to accomplish this.

    I'd take a look at setting up a simple node.js (or anything asynchronous) server, look for Open Graph tags, and not worry about saving images until after the post. Hope that helps.

Sean Creeley at Quora
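A minimal sketch of the scraping step described above, assuming a Node.js 18+ runtime (for the global fetch) and TypeScript; the regex-based parsing is a simplification of what a real scraper with a proper HTML parser would do:

```typescript
// Fetch a page asynchronously and pull out its Open Graph image tags.
// Sketch only: assumes og:image meta tags with the property attribute first.
async function extractOgImages(url: string): Promise<string[]> {
  const res = await fetch(url, { redirect: "follow" });
  const html = await res.text();

  const images: string[] = [];
  // Match <meta property="og:image" content="..."> tags.
  for (const tag of html.match(/<meta[^>]+property=["']og:image["'][^>]*>/gi) ?? []) {
    const content = tag.match(/content=["']([^"']+)["']/i);
    if (content) images.push(content[1]);
  }
  return images;
}

// Usage: extractOgImages("https://example.com/article").then(console.log);
```

Per the answer above, the extracted image URLs would only be proxied for the share widget at this point; saving them can wait until the post is actually made.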

Other answers

To piggyback off Sean's answer, one option you have is Embedly. I have personally used it on some of my larger-scale apps and it helped a ton. Some reasons:

  • oEmbed is simple, and though it's an open standard, there isn't a standard URL path that companies use. In other words, you may find yourself having a lot of "ifs" in your code trying to determine which domain a media URL points at, and then constructing a path based on that finding ( http://fooA/services/oembed vs http://fooB/oembed/2 ). In fact, some companies even have different oEmbed paths per media type, like /picture/oembed, /video/oembed and /audio/oembed, so you'll have to account for this and provide wildcards as necessary ( /*/oembed ). See the sketch after this answer.

  • If you want your photo extractor to work all the time, you're going to have to do some tedious research to account for all the different companies/services a person may take a URL from. To be fair, if you cover the top few players you'll be covering the 90% case (there are more than just a few big ones, and I'm not even sure you care about video for your use case).

  • I believe some media companies would actually rate-limit/CAPTCHA your service if it requested too many oEmbed extractions in a specific timeframe. If this is still the case, you'll want to account for that as well.

  • Embedly's customer service is top notch, and you get analytics for things like the number of times a piece of media came from YouTube and how many images were shared on any given day.

Again, it really depends on how soon you need it, what media you're trying to detect, what services you want it to work with, and how much money you're willing to spend (Embedly is free up to a certain point).

Romy Macasieb
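As a rough illustration of the "lots of ifs" problem described above, here is one way to organize a provider table mapping URL patterns to oEmbed endpoints; the fooA/fooB hosts and paths come from the answer, and the table structure itself is just an illustrative sketch, not a verified provider list:

```typescript
// Each provider exposes oEmbed at a different path, so the code ends up
// keeping a lookup table of URL patterns -> endpoint templates.
interface OEmbedProvider {
  urlPattern: RegExp; // which shared URLs this provider handles
  endpoint: string;   // where that provider serves its oEmbed documents
}

const providers: OEmbedProvider[] = [
  { urlPattern: /^https?:\/\/(www\.)?fooA\//, endpoint: "http://fooA/services/oembed" },
  { urlPattern: /^https?:\/\/(www\.)?fooB\//, endpoint: "http://fooB/oembed/2" },
];

function oembedRequestUrl(sharedUrl: string): string | null {
  const provider = providers.find(p => p.urlPattern.test(sharedUrl));
  if (!provider) return null; // unknown provider: fall back to scraping the page
  return `${provider.endpoint}?url=${encodeURIComponent(sharedUrl)}&format=json`;
}
```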

Here's one way: Fetch per usual for the first share, then cache the result for subsequent shares (I can't say for sure but I would think that many links are shared by more than one person).

Ghalib Suleiman
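A minimal sketch of that fetch-then-cache idea, using an in-memory Map with a TTL as a stand-in for whatever shared cache (Redis, Memcached, etc.) a real deployment would use; the fetcher it wraps is assumed to be a scraper like the one sketched earlier:

```typescript
type ImageFetcher = (url: string) => Promise<string[]>;

// Wrap any image fetcher so that only the first share of a URL pays the
// fetch cost; later shares of the same URL are served from the cache.
function withShareCache(fetchImages: ImageFetcher, ttlMs = 24 * 60 * 60 * 1000): ImageFetcher {
  const cache = new Map<string, { images: string[]; expires: number }>();

  return async (url: string) => {
    const hit = cache.get(url);
    if (hit && hit.expires > Date.now()) return hit.images; // repeat share: cache hit

    const images = await fetchImages(url); // first share: do the real fetch
    cache.set(url, { images, expires: Date.now() + ttlMs });
    return images;
  };
}
```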
