About Vincent Toubiana

I'm an IT expert at the CNIL (the French DPA) and a former NYU postdoc, and I maintain and develop the TrackMeNot extension. On this blog I'll express my personal opinions about search engines and how they handle privacy issues.

Facebook Graph Search: Showing what is not Shared

When Facebook announced Graph Search, they emphasized that it was designed with privacy in mind, yet made two different statements. First, M. Zuckerberg said that it will give access only to « things that people have shared with you », while T. Stocky said that « [you] can only search for what [you] can already see on Facebook ».
I define the « content I share with you » as the content you can see on my timeline, which is in fact a subset of the content you could see about me on Facebook. But Facebook has a different definition and considers « content shared » to be everything about me that is visible, even if it normally requires a considerable effort to find.

Finding pictures with Graph Search

Facebook made it clear that hiding photos on your timeline is no longer enough to prevent people from seeing them. With Graph Search, it's now very simple to find all the photos of someone that are visible to you.
For instance, if one of your friends is tagged in pictures that he decided to remove from his timeline, these "hidden" photos will appear in Graph Search if you have access to them.
It was already possible to find « hidden » pictures a friend was tagged in, but it required a considerable amount of time and effort: you had to go through the list of all his friends and check their pictures in case your friend appeared in some of them. Unless you were really creepy, your friends were safe to assume that most of their « hidden » pictures would not be viewed by you. That's no longer the case, and to control who can see your "hidden" pictures you'll have to delete tags or ask your friends to limit the pictures' visibility.

Removing tags is not the solution

Tags are not only a feature used to annotate content; they are also used to know when someone comments on a picture you appear in. If you delete the tag, you lose the ability to quickly know how people react to a photo. Not removing a tag is different from sharing a picture. Assuming that people want to share every picture they're tagged in is wrong, especially when there is a « share » button that allows them to do precisely that.
Unlike posts on your timeline, tags don't have to be reviewed before they appear in Graph Search. To control which photos of you appear in Graph Search, you have to frequently visit Facebook and remove unwanted tags. You have no option to proactively control your image on Facebook other than relying on your friends not to tag you without your consent.

The case of the friends list

Strangely, Facebook did not adopt the same definition of "sharing" for the "Friends list". Assuming we're friends: if you've decided to hide your friend list from your timeline, I can try to recompose it by visiting each of your friends' timelines and checking whether you appear as a mutual friend. It would require knowing some of your friends first, but that's fairly easy if they posted something on your timeline. By iterating this process, I could retrieve a subset of your friends, as sketched below. Like the photos you are tagged in, this subset is presumably « shared » by you, but it won't appear if I search for the list of your friends.
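To make the iteration concrete, here is a minimal sketch of that reconstruction in Python. The two helpers, visible_friends(user) and timeline_posters(user), are hypothetical stand-ins for the manual browsing an attacker would actually do; Facebook provides no such API for this purpose.

```python
# Sketch of the friend-list reconstruction described above.
# visible_friends(user) and timeline_posters(user) are hypothetical
# callables standing in for manual browsing or scraping.

def reconstruct_friend_list(target, visible_friends, timeline_posters):
    """Return the subset of `target`'s friends that can be recovered."""
    discovered = set()                        # confirmed friends of target
    to_check = set(timeline_posters(target))  # seed: people who posted on the timeline
    checked = set()

    while to_check:
        candidate = to_check.pop()
        checked.add(candidate)
        friends = set(visible_friends(candidate))
        if target in friends:
            # candidate publicly lists target, so they are friends
            discovered.add(candidate)
            # candidate's visible friends become new people to check
            to_check |= friends - checked - {target}
    return discovered
```

Each confirmed friend enlarges the pool of candidates to check, so the recovered subset keeps growing even though the list itself is hidden from the timeline.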
 

Google’s Ad Targeting under the new Privacy Policy

Google's new privacy policy will take effect on March 1st. The Electronic Frontier Foundation (EFF) suggests deleting your Web Search History, and I strongly recommend following this advice because:

1) The searches recorded in your Web Search History before March 1st will be subject to this policy [1].
2) Advertisers could target ads based on your browsing interests and on interests inferred from your Web Search History.

The Good Points

First, I have to say that Google did a remarkable job advertising the new policy: notifications are everywhere. I don't remember any previous policy update being advertised, and then commented on, this much.

Another good point is that many privacy policies have been merged into one. You no longer need a dozen open tabs to get a good view of the policy. However, you still need an extra tab open on the FAQ page with the definitions required to understand the policy. Google could have used the empty space in the right column to display these definitions (like search result previews).

 

The really bad one

So much for the good points; now let's discuss the policy itself. The bottom line is that this policy would allow advertisers to target you based on your web search profile and other interests you expressed in your emails or through your use of Google services. This list of interests can then be combined with the list of interests Google built based on your DoubleClick cookie.

Google does not need your opt-in consent to combine your web search profile with your DoubleClick cookie information. Starting March 1st, Google could adopt a solution similar to the one deployed by Microsoft to target ads based on your search interests, although a sentence in the policy seems to prevent such use of your data:

“We will not combine DoubleClick cookie information with personally identifiable information unless we have your opt-in consent.”

In fact, it means that your DoubleClick cookie will not be linked to your personally identifiable information. So Google cannot put your name in front of the list of interests inferred from your browsing behavior and will not put your name (or any other PII) in the ads you see. Because your Web Search History is likely to be unique, it identifies you and therefore cannot be combined with your DoubleClick profile [2].

But your search profile (i.e. the list of interests inferred from your search history) is unlikely to be unique and therefore does not identify you, so Google can combine it with your DoubleClick cookie information [2]. I believe they could also include some of the search results you clicked on to retarget you.

Similarly, your age, gender and interests expressed during Gtalk and Gmail discussions (or any other interest that Google could infer but that you would not be the only one to express) could be associated with your DoubleClick cookie. If you have any suggestions on how to deal with this data, do not hesitate to share them.

[1] See Google Policy FAQ: “Our new Privacy Policy applies to all information stored with Google on March 1, 2012 and to information we collect after that date.”
[2] Google defines Personal information as information “you provide to us which personally identifies you, such as your name, email address or billing information, or other data which can be reasonably linked to such information by Google”.

A list of Google services vulnerable to Session hijacking

After finding an information leakage in Google Search, I was curious to see whether other pieces of information could be gleaned from other Google services. To verify this, I visited my Google Dashboard, replaced my SID cookie and clicked on all the HTTP services that were listed.
My first attempt failed, as I was systematically redirected to the account page where I was asked to enter my password. I then tried to also spoof the HSID cookie — also sent in clear text — but because HSID is an HttpOnly cookie [1], it cannot be modified by a script or by the user: the cookie can only be set by the server.

Spoofing HTTPOnly cookie

The best solution I found was to install a local proxy to intercept the HTTP traffic and then modify the cookies (I recommend the Burp free edition, which does a good job). It is then quite simple to replace the HSID cookie in the outgoing requests.
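As an illustration only (I used Burp, so this is not the exact setup from the post), a minimal mitmproxy addon doing the same cookie rewrite could look like the sketch below; the two spoofed values are placeholders for cookies captured elsewhere.

```python
# spoof_cookies.py - a sketch, run with: mitmdump -s spoof_cookies.py
# Rewrites the SID and HSID cookies of outgoing plain-HTTP requests to
# Google. HttpOnly only prevents scripts in the browser from touching the
# cookie; a local proxy can still rewrite the Cookie header it relays.
from mitmproxy import http

SPOOFED_SID = "PASTE_CAPTURED_SID_HERE"    # placeholder
SPOOFED_HSID = "PASTE_CAPTURED_HSID_HERE"  # placeholder

def request(flow: http.HTTPFlow) -> None:
    if flow.request.scheme == "http" and "google.com" in flow.request.pretty_host:
        flow.request.cookies["SID"] = SPOOFED_SID
        flow.request.cookies["HSID"] = SPOOFED_HSID
```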
This time it worked: I was able to log into several services with the spoofed account:

  • Google Alerts: I was able to view and edit the mail alerts that were configured for the spoofed account.
  • Google Social Content: This service lists all your Gtalk contacts (that is, most of the people you have chatted with at least a couple of times).
  • Google Contacts: This is the Gmail contacts manager; it allows you to view, edit and create Gmail contacts. Quite useful if you want to get a list of people to spam. An attacker could also attempt to replace a contact's mail address with his own.
  • Google Reader: You could see and edit RSS subscriptions.
  • Google Maps: You could see the maps associated with the spoofed account.

There might be other vulnerable services, but I think this list is already fairly comprehensive and each of the listed services is likely to provide sensitive information.

Design flaws

Spoofing an unsecured cookie to hijack a session is nothing new. Nevertheless, there are two design flaws that make HSID and SID cookie spoofing more critical:

  • These cookies can be used to access multiple services: when Google created these services, it did not assign a specific cookie to each of them. Therefore, a single pair of cookies provides access to all of them.
  • SID cookies are still valid even after the user logs out: if a user thinks his session has been compromised, there is nothing he can do to revoke it. It seems that this was already pointed out 4 years ago.

Conclusion

Google is working on these issues and they should be fixed soon (users are already redirected to encrypted search [2]). A next step would be to check whether other major Web service providers have a better cookie policy.

References:
[1] Jeff Atwood, “Protecting Your Cookies: HttpOnly”, http://www.codinghorror.com/blog/2008/08/protecting-your-cookies-httponly.html
[2] Evelyn Kao, “Making search more secure”, http://googleblog.blogspot.com/2011/10/making-search-more-secure.html

Searching session cookies and click-streams

In our paper on Google's session cookie information leakage, Vincent Verdot and I described how to capture SID cookies on a shared network and run the attack with Firesheep (see the previous post).

Nevertheless, there are other ways to capture such cookies. For instance, one could use malware to capture search traffic, but the simplest solution remains to search for SID cookies.

Redirecting traffic

Using malware to redirect the traffic of infected computers through a proxy controlled by the attacker would allow session cookies to be captured. Such an infection was recently detected by Google, which displayed a banner on its search page [1]. In that particular case, the redirection of Google traffic was merely a side effect that triggered the malware detection.

According to Google, a couple of million computers [1] were infected by this malware. Attackers could have captured a significant number of session cookies and run the attacks described in our paper.

Googling for cookies

The simplest way to find SID cookies is to search for them. Typing the right query in Google returns a list of pages where people have published captured HTTP traffic, including SID cookies (this also works with Yahoo!).

If you replace your SID cookie with one of the cookies listed on these pages, you will receive the same personalized results as its owner. From these results you can quickly extract a list of visited results, Gmail contacts and Google+ acquaintances.

Not all these results contain full SID cookies and some of the listed SID cookies may have already expired, but this simple search should provide plenty of valid cookies to test the flaw. I've written a Chrome extension to simply replace the SID cookie for the "google.com" domain and quickly test different accounts. Once installed, click on the red button in the upper right corner, paste the cookie value and click Save.
On Firefox you could use the Web Developer extension to edit cookies (it does not seem to work on Firefox 5.0).
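For readers who prefer a script to a browser extension, here is a rough Python sketch of the same test, assuming the requests library. The URL parameters and the crude "do the two pages differ?" check are simplifications of mine, not a description of Google's interface.

```python
# Compare a cookie-less Google search with the same search carrying a
# captured SID cookie; any systematic difference reflects personalization
# tied to the cookie's owner (visited links, social annotations, ...).
import requests

def fetch_results(query, sid=None):
    cookies = {"SID": sid} if sid else {}
    resp = requests.get(
        "http://www.google.com/search",
        params={"q": query, "num": 20},
        cookies=cookies,
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    return resp.text

if __name__ == "__main__":
    captured_sid = "PASTE_A_PUBLISHED_SID_HERE"  # placeholder
    anonymous = fetch_results("privacy")
    personalized = fetch_results("privacy", sid=captured_sid)
    print("pages differ:", anonymous != personalized)
```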

Linking data and PII

By publishing their (apparently innocuous) cookies, users indirectly published part of their click-stream and associated it with their email address. They thus established a public record of having visited these URLs [2], and this record is now linked to their name. From there, their full anonymized click-stream — not limited to visited search results — could be de-anonymized by a tracking ad network.

References

[1] Damian Menscher, “Using data to protect people from malware”, http://googleonlinesecurity.blogspot.com/2011/07/using-data-to-protect-people-from.html
[2] Arvind Narayanan, “There is no such thing as anonymous online tracking”, http://cyberlaw.stanford.edu/node/6701

Show me your Cookie and I’ll tell you what you visited

Web Search History Information Leakage

Back in February, I re-discovered a small flaw in Google Search: result personalization leaks the list of results you clicked on. This leak was already known and mentioned in a paper by Castelluccia et al., but several features added by Google made it critical.

  • First, there is the possibility (for Web Search History users) to view only results that have already been visited (visit http://www.google.com/webhp?tbs=whv:1).
  • Second, with Google Instant it is possible to view visited links quickly without leaving a trace in the victim's Web Search History (the attack is non-destructive).
  • Third, when Google displayed previously visited search results, it used to show the query the victim had searched for when clicking on the result. The attacker thus knew which keywords the victim usually searched for and could enter these keywords in a search box to get new results, which in turn suggested new keywords to type, and so on…

The third point has been addressed by Google very recently, when they introduced the new interface with the black top bar.
Vincent Verdot and I wrote a paper about this flaw. In order to conduct an experiment, we’ve been working on a proof of concept and an evaluation tool that we used to gather results.

Proof of concept based on Firesheep

This proof of concept is based on Firesheep (I just added a module and modified the attack launched when a SID cookie is captured). Firesheep only works with the latest version of Firefox 3.6; do not expect to run it on Firefox 5.
With our version of Firesheep, when a Google SID cookie is captured, the account name appears in the Firesheep sidebar. Double clicking on it starts the attack; double clicking again displays the retrieved list of visited links.

The Evaluation tool

We also designed a Firefox extension which downloads your web search history to your computer, issues a couple of search queries (mostly searching for extensions like « .com, .fr, .us, .html, www, … ») and checks how many clicked links can be retrieved.
We ran this experiment with a dozen accounts and sent the results to Google. We'll soon publish the paper as a technical report.

How to protect your Click History

We’ve been in contact with Google Security Team who is working on a fix that should soon be deployed. In the meantime, make sure you’re not logged in your Google account when you’re connected on an unsecured network.
If you do not use Web Search History you may also purge it and disable the feature (visit https://www.google.com/history).
Also, TrackMeNot and Unsearch will reduce the exposure of your click history.

 

Running the Test

If you want to run the test, here is what you need (it takes about 5 minutes):

  • A Google Account with Web Search History enabled. To check that you activated it, visit https://www.google.com/history; if it asks you to turn on the feature, then you cannot help here. Thanks anyway for trying.
  • Install this Firefox extension (https://unsearcher.org/Test%20Flaw/ad@monitor.xpi): download it, then drag and drop it into Firefox. Once the extension has been installed, restart Firefox.
  • Modify your Google Search preferences (http://www.google.com/preferences?hl=en) to disable “Google Instant” and set the number of returned results to 100 (instead of 10).
  • Sign out and Sign in again on Google.com.
  • In Firefox, click on “Tools -> ADMONITOR -> History”. A first message should appear to inform you that the extension is about to extract your search history. Click OK and do not close the Firefox window.
  • After five minutes, another message will appear to inform you that the test is finished and tell you where to find the generated file. A Firefox window should have opened (not necessarily taking focus). You can send me the content of this window via e-mail and we'll integrate it into our experiment results.
  • You can remove the generated files and uninstall the sid@testextension.

Thanks for helping us.

A follow up on Google Policies

Last year, I started to analyze the log retention policies of Google Search and Google Suggest for the NYU Privacy Research Group meetings. To complete this analysis, I'm trying to review the policies of other Google services.

‘Personal Information’ vs ‘Information we collect’

Although I have just started this review, I noticed that Google seems to change the name of the section describing the recorded information. Depending on the service, this section is called:

  • “Personal Information” for +1, Blogger, Buzz, Notebook, Groups, Knol, Moderator, Music, Orkut, Picasa, Power Meter, SafeBrowsing, Sites, Voice, Web History and YouTube;
  • “Information we collect” for Advisor, Checkout, Desktop, Gears, TV, Location, Mobile, Toolbar, Trader, Web Accelerator.

My first assumption was that Google uses “Personal Information” for services that require a Google Account and “Information we collect” otherwise. But there are several exceptions: for instance, SafeBrowsing does not require an account while Google TV does.

In addition, explicit references to server logs are made in these “Personal Information” sections, even though Google does not consider server logs to be personal information (see their FAQ).

The Knol bridge

A loophole in Knol's privacy policy allows Google to link your IP address and cookies to your user account. Knol (for "knowledge") is Google's alternative to Wikipedia. You need a Google account to contribute to Knol and — as in the privacy policies of most Google services — Google mentions that it:

‘records information [your] account activity (e.g., storage usage, number of log-ins, actions taken), data displayed or clicked in the Knol interface […] and other log information (e.g., browser type, IP address, date and time of access, cookie ID, referrer URL). If you are logged in we may associate that information with your account.
[emphasis is mine]

This last sentence is unusual and suggests that if you ever logged in and visited Knol, Google can associate your IP address and cookie IDs with your Google Account — and all the personal information attached to it. From there, Google can directly de-anonymize the searches you did when you were not logged in.

A policy template

This loophole is certainly not intentional; the exact same sentence appears in many privacy policies. As a matter of fact, it also appears in the YouTube and Blogger policies. Therefore we can assume that the same template has been used for services hosting user-generated content.

However, there are two big differences between Knol and YouTube or Blogger:

  • There is no explicit mention of server logs in these policies. For these services, Google only mentions that its ‘servers automatically record information about your use of the service’.
  • Both Blogger and YouTube have their own domain names, meaning that the cookies you send to YouTube are different from the cookies you send when you visit a Google website. Unlike these services, Knol uses Google's domain name. Therefore, you send Knol the same cookies that you send to Google when you are doing a search.

While not dramatic considering Knol's relative lack of success, this mistake could have been more critical in the privacy policy of a more popular service.

Introducing Unsearch

Unsearch Capture Screen

My first post introduces ‘Unsearch’, the extension that gave its name to this blog. Unsearch is an extension for the Google Chrome browser that allows you to search on Google without leaving traces in Google Search logs or in Google Web Search History. Normally, your searches on Google are recorded and logged with your IP address and cookies. While one can manage his web search history through this interface, it is not possible to manage Google's search logs.

Motivations

Analyzing Google's log retention policies, we found out that, even after anonymization, some pieces of information in Google search logs might lead to user identification (for further information, see our paper). When you use Unsearch, all pieces of information are deleted from Google's servers within 15 days.

Also, one may not want to have all of his searches recorded in his web search history. While it is possible to log out or to delete entries from Google Web Search History, I prefer to be able to do ‘off the record’ searches directly from the search page, especially now that the ‘Log out’ button is one additional click away and we have only three seconds before a search is recorded.

Paradoxically, with the new navigation bar, I find it more difficult to notice when I'm not logged into my account. Putting the risk of pseudonym usurpation aside, the account information is – in my opinion – less visible. That's problematic because I cannot remove searches that I did from someone else's account.

How it works

When you type keywords in the Google search bar, you get Google Instant results, but your query won't be logged unless you click on the page or remain inactive for three seconds (more details).

When you click on Unsearch, it removes the keywords you typed in the search box (so they will not be recorded), but before that, it copies the Google Instant results and pastes them into another tab where your interactions are not monitored: you can browse the results without Google noticing (redirects are removed).
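Unsearch does this cleanup in JavaScript inside the copied results page; as a language-neutral illustration of the "redirects are removed" step, here is a small Python sketch that unwraps a Google /url?q=... redirect link into its direct destination. The wrapper format is an assumption based on how result links looked at the time.

```python
# Unwrap Google's click-tracking redirect links (/url?q=<destination>&...)
# so that following a result does not go through Google's servers.
from urllib.parse import urlparse, parse_qs

def unwrap_result_link(href):
    parsed = urlparse(href)
    if parsed.path == "/url":
        query = parse_qs(parsed.query)
        for key in ("q", "url"):  # the destination parameter
            if key in query:
                return query[key][0]
    return href  # already a direct link

if __name__ == "__main__":
    wrapped = "http://www.google.com/url?q=http://example.org/page&sa=U"
    print(unwrap_result_link(wrapped))  # -> http://example.org/page
```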

Because you never clicked on ‘Search’, Google won't log your query and it won't appear in your Web Search History either. And since you're still logged into your Google account, you'll still get personalized search results. This approach is somewhat complementary to TrackMeNot, which lets users shape their search profile without issuing queries themselves.

Shortcomings

Because Unsearch is based on Google Instant, this feature must be turned on for Unsearch to work. Furthermore, you have to click on Unsearch within three seconds or Google will log the query as a normal search. I tried to find a workaround (like adding and removing a space in the search bar) but I'm still looking for a good solution.

In fact, the main drawback is that Unsearch relies on Google Instant's log retention policy. Should this policy change, Unsearch would no longer prevent Google from logging searches. Such a change is very unlikely, as Google has never extended its log retention period or made its ‘anonymizing’ process less effective.

Furthermore, even if this policy changed, countermeasures exist to prevent Google from logging most queries in their entirety (see some possible solutions in the Unsearch presentation (.ppt)).