A software bug that was identified during routine site maintenance was responsible for the error, Flickr Vice President Brett Wayn said in an online forum thread. Though the affected photos did not appear in search results, they were visible on Flickr between Jan. 18 and Feb. 7, he said.
Only a small percentage of photos, limited to those uploaded between April and December of 2012, were affected, Flickr said.
Flickr has not acknowledged the problem on its official blog; affected users were notified individually with the message that was posted to the forum.
Not surprisingly, users are ticked off, judging from their reactions online. It is “very worrying” that images’ privacy settings could be changed spontaneously, one person said on the forum.
The error is “very frustrating,” another said — “enough for me to go elsewhere.”
Moreover, Flickr may have made things worse in its attempt to fix the problem, by setting any potentially affected photos in users’ accounts to “private.” This means that links to those images, and embeds of them on other websites, will no longer work, Flickr said.
Because making a public photo private on Flickr changes the image’s URL, the HTML code needs to be manually corrected for each photo, users are pointing out.
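The chore users describe can be sketched in a few lines. This is an illustrative example only (the URLs and HTML are hypothetical, not Flickr's actual URL scheme): once an image's address changes, every page that embedded the old address has to be patched by hand.

```python
# Hypothetical old and new image URLs -- when the photo's URL changes,
# any page embedding the old one serves a broken image.
old_url = "https://photos.example.com/abc123/public.jpg"
new_url = "https://photos.example.com/abc123/x9f2k.jpg"

# A page that embedded the photo while it was public:
embed_html = f'<img src="{old_url}" alt="vacation photo">'

# The embed still points at the stale address, so it must be
# corrected manually, photo by photo, page by page:
fixed_html = embed_html.replace(old_url, new_url)
print(fixed_html)  # the <img> tag now points at the new URL
```

The fix is trivial for one photo; the complaint is that it has to be repeated for every affected image on every site that embedded it.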
Flickr says it has put in place additional measures to prevent the problem from happening again.
The announcement was made by an official with the U.S. Federal Trade Commission (FTC) during a Do Not Track (DNT) event hosted by Mozilla, the maker of Firefox. Mozilla has been a major champion of the technology.
Twitter itself kept a low profile, saying only, “We applaud the FTC’s leadership on DNT,” in a tweet from its corporate account on Thursday.
“Twitter seems to be the one social network that’s doing the right thing [on privacy],” said Brian Blau, a Gartner research director who specializes in consumer technology. “They’ve gone out of their way, compared to competitors, to stand up for users’ rights.”
Do Not Track relies on information in the HTTP header, part of the requests and responses sent and received by a browser as it communicates with a website, to signal that the user does not want to be tracked by online advertisers and sites. If a website or service abides by Do Not Track, it must stop tracking users’ movements, usually by discarding a Web cookie that handled the chore.
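The mechanism is simple enough to sketch. The `DNT: 1` request header is the real signal the standard uses; everything else below (the handler function, the cookie name) is a hypothetical illustration of how a compliant service might respond, assuming it tracks users via a cookie.

```python
def handle_request(request_headers: dict) -> dict:
    """Build response headers, honoring a Do Not Track signal.

    If the browser sent "DNT: 1", a compliant service skips setting
    (or refreshing) its tracking cookie.
    """
    response = {"Content-Type": "text/html"}
    if request_headers.get("DNT") == "1":
        # User opted out of tracking: no tracking cookie is set.
        return response
    # No opt-out signal: set the (hypothetical) tracking cookie.
    response["Set-Cookie"] = "tracker_id=abc123"
    return response

# A request with DNT enabled gets no tracking cookie:
print("Set-Cookie" in handle_request({"DNT": "1"}))  # False
print("Set-Cookie" in handle_request({}))            # True
```

The point of the standard is exactly this branch: the header expresses the user's preference, and compliance means the service's tracking machinery stays off when the header says `1`.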
Twitter did exactly that, according to Jonathan Mayer, one of the two Stanford University researchers who came up with the HTTP header standard.
Twitter is the first social service to support Do Not Track, an initiative the FTC first endorsed in late 2010.
The creators of the mobile app Girls Around Me drew sharp criticism Monday for helping men to “stalk” unsuspecting women, but the incident also reveals how much we still have to learn about what social networks reveal about us.
The app collected data from Foursquare, showing local bars where women had checked in, and matched that with information from their Facebook profiles, including photos and sometimes their dating status. The end result was that the app’s users could see how many single women were in a particular nightspot, what they looked like and what their names were.
Foursquare blocked the app’s use of its API, saying the app violated Foursquare’s privacy policies. That forced the app’s developer, the Russian company i-Free, to pull Girls Around Me from the App Store.
But i-Free didn’t hack into people’s accounts to get at the information; it only used what people had made freely available on their social networking sites. The incident shows how compiling such public information can make people uncomfortable when it’s done in unexpected ways.
“When you see something so out of context with what you expect, it ends up being shocking,” said Jules Polonetsky, the director of the Future of Privacy Forum. “I get that when I’m out in a big crowd, I’m not secret. But it still seems bizarre if someone scans every face in the crowd and then somehow identifies it. It seems to push beyond the appropriate context.”
The problem, to some extent, resides in a culture gap between developers, who think that if information is available, they can use it to innovate in any way they see fit, and users, who don’t always understand how revealing their digital information can be, privacy advocates said.
John M. Simpson, the director of Consumer Watchdog’s privacy project, said even if people understand what data they’re sharing on social networks, they don’t expect it to be “reconfigured so they can be hit upon.”
“Just because something is technologically possible is no justification for necessarily doing it,” he said.