Few people would characterize the popular and influential microblogging service Twitter as "secure." Attacks on Twitter and its users appear to be increasing (latest: Twitter hit with "Don't Click" clickjacking attack).
Two security issues currently plague the popular social network: the widespread use of link shorteners like TinyURL, which lead users to unknown destinations, and a single-password login system that some hope will be fixed with the arrival of OAuth.
Don't click on that link!
Whenever I see an interesting tweet followed by a TinyURL link, I click it. I'll admit it. I don't even consider the ramifications of my actions and often, I'm surprised by where I go.
But I don't think I'm alone. TinyURL is the most common link you'll see on Twitter, but it's also one of the easiest ways for a malicious user to expose you to issues ranging from phishing scams to malware installs.
Luckily, Twitter is aware of this issue, and according to its co-founder, Biz Stone, the company is working on ways to make linking safer on the site.
"User security is absolutely a concern and we're working to make the interface safer in that regard," Stone told ZDNet blogger Jennifer Leggio. "We are looking into other ways to display shared links, for example noting whether a link goes to a picture or a video or some other media element. While more a feature, this could help in addressing some of the risk with the URL redirection."
Ginx, a new third-party service (which ironically requires your Twitter login credentials to function; see next section), automatically expands shortened URLs before you click on them.
But what about stopping the use of TinyURL, Bit.ly, and other link-shortening services altogether? So far, Twitter has not indicated that it wants to do that and, as some security experts claim, it shouldn't consider that option.
Peter Gregory, a professional security expert and blogger at the Securitas Operandi blog, said he believes TinyURL use "basically comes down to trust: do you trust the source of the link, or is the creator of the link luring you into visiting a malicious Web site that will attempt to implant malware on your computer?"
Last year, TinyURL introduced a major improvement to the service that anyone using Twitter should use: a preview feature.
TinyURL's preview feature doesn't require registration; instead, it places a cookie on your machine. Surf to the company's preview page, click the link there to enable previews, and from that moment forward, any TinyURL link you click on Twitter or elsewhere across the Web won't immediately send you to the destination site. Instead, you'll be redirected to a TinyURL preview page where you can examine the link and decide whether you want to visit the respective page.
Bit.ly, another URL-shortening service, provides a Firefox plug-in that allows you to preview links. With both solutions running, the risk of being redirected to a malicious site should be cut down considerably, though not eliminated--nothing in link security is a sure thing.
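The same "look before you leap" idea behind these preview tools can be sketched in a few lines of Python. Shorteners work by answering with an HTTP redirect (301/302) whose Location header holds the real destination, so a client can issue a HEAD request, refuse to follow the redirect, and show the user where the link actually points. This is a minimal sketch using only the standard library, not the code behind TinyURL's or Bit.ly's own preview features:

```python
import urllib.error
import urllib.request

REDIRECT_CODES = (301, 302, 303, 307, 308)

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects so the Location header can be inspected."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def destination(status, headers):
    """Pull the redirect target out of a response, if there is one."""
    if status in REDIRECT_CODES:
        return headers.get("Location")
    return None  # not a redirect: the URL is already the final page

def expand(short_url):
    """Return where a shortened URL points, without visiting that page."""
    opener = urllib.request.build_opener(NoRedirect)
    request = urllib.request.Request(short_url, method="HEAD")
    try:
        opener.open(request, timeout=10)
    except urllib.error.HTTPError as err:
        # The refused redirect surfaces here as an HTTPError
        return destination(err.code, err.headers)
    return None
```

A client could run `expand()` on every shortened link in a tweet and display the real destination next to it, leaving the decision to click with the user.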
But that's just one security issue Twitter and its users are forced to confront each day.
Open the front door, please
Do you want to update your Twitter stream with audio through Twitsay? How about updating Twitter while the site is down with Twitddict? Want to work with Twhirl, TwitterFeed, TweetDeck, or some other Twitter client? You can, as long as you give those services your Twitter username and password.
When did it become common practice to hand a start-up you've known about for only 10 minutes the username and password of a service you rely on? Any security expert will tell us we should never give our password to third parties, and yet, if we want to use third-party Twitter tools, we need to do just that.
Last year, a service called GroupTweet, which takes direct messages sent to a user and republishes them as tweets on that respective user's account, was at the center of a controversy when one of its users, Orli Yakuel, had all her direct messages--many of them personal--become public. She claims that it happened because GroupTweet didn't make its operation clear and she didn't realize the service would work that way.
For his part, GroupTweet founder Aaron Forgue said in a statement that he was "100 percent at fault for this fiasco because I did a poor job of explaining the steps one needs to take to use GroupTweet. I sincerely apologize."
Maybe it's true that GroupTweet didn't explain the "steps one needs to take" to use it, but I think there's a better explanation for what happened: GroupTweet, like dozens of other third-party Twitter tools, takes the user's password and gains full access to the account. When users hand over that information, they are giving any third party an open door to do what it wishes with their Twitter account.
So far, the effect has been relatively minor, and few people have been harmed by handing their Twitter usernames and passwords to start-ups. But how much longer can we put ourselves at risk before a major breach of personal data hits the social network?
OAuth to the rescue?
Twitter's developers have plans to protect us: they want to use OAuth--an open user-authentication protocol--to act as the middleman between your Twitter account and third-party applications.
If OAuth is implemented on Twitter, then whenever you go to a third-party site like GroupTweet and sign up to use the service, you would tell that tool your Twitter username. The tool would then contact Twitter and ask for permission to perform its function on your account. Twitter would ask you to verify that you want the third party to perform the operation and would request your password to prove it. Once that's complete, the third party could perform its service, and you could have peace of mind knowing that you doled out your password only to Twitter itself.
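Under the hood, what makes this possible is that an OAuth 1.0 application never sends the user's password at all: it signs each request with secrets exchanged during authorization, using HMAC-SHA1 over a canonical "signature base string" as defined in the OAuth 1.0 specification. This is an illustrative sketch of that signing step (the key names and values are hypothetical, not Twitter's actual credentials):

```python
import base64
import hashlib
import hmac
import urllib.parse

def percent_encode(value):
    """Percent-encode per RFC 3986, as the OAuth 1.0 spec requires."""
    return urllib.parse.quote(str(value), safe="")

def signature_base_string(method, url, params):
    """Canonical string: METHOD & encoded-URL & encoded, sorted params."""
    pairs = sorted((percent_encode(k), percent_encode(v))
                   for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(),
                     percent_encode(url),
                     percent_encode(param_string)])

def sign(method, url, params, consumer_secret, token_secret=""):
    """HMAC-SHA1 signature proving the app holds the shared secrets --
    so the user's actual password never touches the third party."""
    key = (percent_encode(consumer_secret) + "&" +
           percent_encode(token_secret)).encode("ascii")
    base = signature_base_string(method, url, params).encode("ascii")
    digest = hmac.new(key, base, hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Hypothetical request an authorized client might sign:
params = {
    "oauth_consumer_key": "app-key",        # issued to the app, not the user
    "oauth_nonce": "abc123",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1234567890",
    "oauth_token": "user-access-token",     # granted after the user approves
    "oauth_version": "1.0",
}
signature = sign("POST", "https://twitter.com/statuses/update.json",
                 params, "consumer-secret", "token-secret")
```

Because the user can revoke the access token at any time, a misbehaving app loses access without the user ever changing a password.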
You'll also have some control over what exactly the third-party app can access from your Twitter account, and you'll be able to revoke individual apps' access to your account as you wish.
According to Twitter's senior software engineer, Britt Selvitelle, who engaged in a Google Groups conversation for Firefox developers, Twitter "will be using OAuth as (its) primary form of token auth(orization)" because the system works well.
Alex Payne, Twitter's API lead, told ReadWriteWeb last month that Twitter has an acute understanding of what it needs to do to secure its service and it has a road map in place that's currently on schedule.
"Our launch plan entails a month or two in private beta, a similar amount of time in public beta, and then a final release," Payne told the blog. "After the final release, we'll allow OAuth to coexist with Basic Auth for no less than six months, and hopefully not much longer. OAuth should be the sole supported authentication mechanism for the Twitter API by the end of 2009."
A blog post published Thursday on Inuda Innovations' Web site said Twitter's OAuth private beta had begun.
If OAuth works for Twitter, as Payne suggests it will, the third-party login problem will be eliminated, resolving one important issue facing the company.
Is Twitter doing enough?
It's evident, based on the company's actions over the past year and with the news of OAuth entering private beta Thursday, that Twitter is focused on making security a key component in its plans going forward. But whether or not the company is doing enough, or fast enough, is up for debate.
Some might say Twitter needs to do more to help its users and ensure that as it becomes more "mainstream," it does everything it can to keep its users safe. But there are others who say Twitter users need to watch out for themselves and be just as savvy using the microblog as they are when trying to remove malware from Windows.
But it was Alex Payne, one of Twitter's most vocal security champions, who posted a message on his blog highlighting Twitter's security flaws after Twitter accounts were "hacked" earlier this year.
"Several months after I joined Twitter in early 2007, I suggested to the team that we do a full internal security audit," Payne said in a blog post on his personal site. "Stop all work, context switch to Bad Guy Mode, find issues, fix them. I wish I could say that we've done that audit in its entirety, but the demands of a growing product supported by a tiny team overshadowed its priority.
"Now we're in an unwelcome position that many technical organizations get into: (we are) so far into a big code base that's never seen any substantial periodic audits that the only way to really find all the issues is to bring in some outside help--something I sincerely hope we end up doing, but is not my call," he continued. "Ultimately, outside security audits are the price a company pays for not building security mindfulness and education into day-to-day development."
Twitter did not respond to requests for comment.