I am not deeply enough versed in the intricacies of scalable architecture to know whether this post compares apples to oranges, apples to apples, or whether it's all just a bunch of horse apples.
Let's start with a Twitter blog post from January 18. Here's a portion.
Our open approach is very much driven by Twitter engineers like Blaine Cook. Blaine coded the distributed queue server Twitter uses to route vast numbers of messages in the background so front-end response time can remain quick.
Starling is a light-weight persistent queue server that speaks the MemCache protocol. It was built to drive Twitter's backend, and is in production across Twitter's cluster.
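For readers who, like me, wanted a more concrete picture of what "speaks the MemCache protocol" means: as I understand it, any ordinary memcached client can act as a producer or consumer, with a set pushing a job onto a named queue and a get popping one off. Here's a rough sketch using the python-memcached library; the address, port, and queue name are my own illustrative guesses, not anything from Twitter's post.

```python
# Rough sketch: because Starling reuses the memcached wire protocol,
# a plain memcached client can serve as the queue client.
# Host, port, and queue name below are assumptions for illustration only.
import memcache

starling = memcache.Client(["127.0.0.1:22122"])

# Enqueue: "set" on a key pushes a job onto the queue with that name.
starling.set("tweets_to_deliver", "status_id=12345")

# Dequeue: "get" on the same key pops the next job off that queue.
job = starling.get("tweets_to_deliver")
if job is not None:
    print("processing", job)
```

The appeal of this design, as far as I can tell, is that the front end just fires a quick set and moves on, while background workers drain the queue at their own pace.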
However, this morning Louis Gray linked to a post from AssetBar that proposed a new solution for Twitter.
Twitter’s scaling problem is exactly the same thing that makes it valuable: their database of users. And getting a traditional SQL/Relational DB to scale horizontally is pretty tough. Sharding works for some apps but not others.
It so happens that our new distributed database technology is rather well suited for twitter-style high-volume reliable messaging. If there is sufficient community interest we could help solve downtime by putting together a “twitter-proxy” that keeps twitter users on twitter, but provides an additional layer of data accessibility in the ecosystem. Not compete, just help keep users happy.
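To make the sharding point in that quote a little more concrete, here's a toy sketch of hash-based user sharding. The shard names, the hashing scheme, and the shard_for helper are entirely my own illustration, not anything AssetBar or Twitter have described.

```python
# Toy illustration of user sharding: each user is assigned to one of
# several databases by hashing their id. Shard names and the hashing
# choice are placeholders, not a real scheme.
import hashlib

SHARDS = ["db0", "db1", "db2", "db3"]  # placeholder connection names

def shard_for(user_id: int) -> str:
    """Pick a shard deterministically from the user id."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# One user's data lives on exactly one shard...
print(shard_for(42))

# ...but a query spanning many users (say, building a timeline from
# everyone you follow) has to fan out across several shards.
following = [7, 42, 9001, 314]
print({shard_for(uid) for uid in following})
```

That fan-out across shards is, I gather, where "works for some apps but not others" starts to bite.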
So I have a question for my more technical readers: are Starling and AssetBar's proposed distributed database two fundamentally different approaches?