Facebook Says It Deleted 865 Million Posts, Mostly Spam
SAN FRANCISCO — Facebook has been under pressure for its failure to remove
violence, nudity, hate speech and other inflammatory content from its
site. Government officials, activists and academics have long pushed the
social network to disclose more about how it deals with such posts.
Now, Facebook is pulling back the curtain on those efforts — but only so far.
On Tuesday, the Silicon Valley company published numbers for the first time
detailing how much and what type of content it takes down from the
social network. In an 86-page report, Facebook revealed that it deleted
865.8 million posts in the first quarter of 2018, the vast majority of
which were spam, with a minority of posts related to nudity, graphic
violence, hate speech and terrorism.
Facebook also said it removed 583 million fake accounts in the same period. Of the accounts that remained, the company said 3 percent to 4 percent were fake.
Guy Rosen, Facebook’s vice president of product management, said the
company had substantially increased its efforts over the past 18 months
to flag and remove inappropriate content. The inaugural report was
intended to “help our teams understand what is happening” on the site,
he said. Facebook hopes to continue publishing reports about its content
removal every six months or so.
Yet the figures the company published were limited. Facebook declined to
provide examples of graphically violent posts or hate speech that it
removed, for example. The social network said it had taken down more
posts from its site in the first three months of 2018 than it had during
the last quarter of 2017, but it gave no specific figures from previous
years, making it hard to assess how much it had stepped up its efforts.
The report also did not include all the posts that Facebook had removed.
After publication of this article, a Facebook spokeswoman said other
types of content had been taken down from the site in the first quarter
because they violated community standards, but those were not detailed
in the report because the company was still developing metrics to study
them.
Facebook also used the new report to advance a push around artificial
intelligence to root out inappropriate posts. Facebook’s chief
executive, Mark Zuckerberg, has long highlighted A.I. as the main
solution to helping the company sift through the billions of pieces of
content that users put on its site every day, even though critics have
asked why the social network cannot hire more people to do the job.
“If we do our job really well, we can be in a place where every piece of
content is flagged by artificial intelligence before our users see it,”
said Alex Schultz, Facebook’s vice president of data analytics. “Our
goal is to drive this to 100 percent.”
Facebook is aiming for more transparency after a turbulent period. The company
has been under fire for a proliferation of false news, divisive messages
and other inflammatory content on its site, which in some cases have
led to real-life incidents. Graphic violence continues to be widely
shared on Facebook, especially in countries like Myanmar and Sri Lanka, stoking tensions and helping to fuel attacks and violence.
Facebook has separately been grappling with a data privacy scandal
over the improper harvesting of millions of its users’ information by
political consulting firm Cambridge Analytica. Mr. Zuckerberg has said that the company needs to do better and has pledged to curb the abuse of its platform by bad actors.
On Monday, as part of an attempt to improve protection of its users’ information, Facebook said it had suspended roughly 200 third-party apps that collected data from its members while it undertook a thorough investigation.
The new report about content removal was another step by Facebook to clean
up its site. Jillian York, the director for international freedom of
expression at the Electronic Frontier Foundation, said she welcomed
Facebook’s numbers.
“It’s a good move and it’s a long time coming,” she said. “But it’s also frustrating
because we’ve known that this has needed to happen for a long time. We
need more transparency about how Facebook identifies content, and what
it removes going forward.”
Samuel Woolley, research director of the Institute for the Future, a think tank in Palo Alto, Calif., said Facebook needed to bring in more independent voices to corroborate its numbers.
“Why should anyone believe what Facebook says about this, when they have
such a bad track record about letting the public know about misuse of
their platform as it is happening?” he said. “We are relying on Facebook
to self-report on itself, without any independent vetting. That is
concerning to me.”
Facebook previously declined to reveal its content removal efforts, citing a lack
of internal metrics. Instead, it published a country-by-country
breakdown of how many requests it received from governments to obtain
Facebook data or restrict content from Facebook users in that country.
Those figures did not specify what type of data the governments asked
for or what posts were restricted. Facebook also published a
country-by-country report on Tuesday.
According to the new content removal report, about 97 percent of the 865.8
million pieces of content that Facebook took down from its site in the
first quarter was spam. About 2.4 percent of that deleted content had
nudity, Facebook said, with even smaller percentages of posts removed
for graphic violence, hate speech and terrorism.
In the report, Facebook said its A.I. found 99.5 percent of terrorist
content on the site, leading to the removal of roughly 1.9 million
pieces of content in the first quarter. The A.I. also detected 95.8
percent of posts that were problematic because of nudity, with 21
million such posts taken down.
But Facebook still relied on human moderators to identify hate speech
because automated programs have a hard time understanding context and
culture. Of the 2.5 million pieces of hate speech Facebook removed in
the first quarter, 38 percent was detected by A.I., according to the new
report.
Facebook said it also removed 3.4 million posts that had graphic violence, 85.6 percent of which were detected by A.I.
The company did not break down the numbers of graphically violent posts by
geography, even though Mr. Schultz said that at times of war, people in
certain countries would be more likely to see graphic violence than
others. He said that in the future, Facebook hoped to publish
country-specific numbers.
The report also did not include any figures on the amount of false news on Facebook
as the company did not have an explicit policy on removing misleading
news stories, Mr. Schultz said. Instead, Facebook has tried to deter the
spread of misinformation by removing spam sites that profit from
advertisements that run alongside false news, and by removing fake
accounts that spread them.
Correction:
An earlier version of this article, using information provided by
Facebook, referred incorrectly to the 3 to 4 percent of accounts on the
social network that were fake. It is the percentage of Facebook accounts
that were fake even after a purge of such accounts. It is not the
percentage of Facebook accounts that were purged as being fake. The
article also misstated how often Facebook hopes to publish reports about
the content it removes. It is roughly every six months, not every
quarter.
____________________________________
Social Media Roundup: Facebook Cryptocurrency Rumor, Instagram Emoji Slider Scale, Snapchat Rollback
“Social Media Roundup” is a weekly
roundup of news pertaining to all of your favorite websites and
applications used for social networking. Published on Sundays, “Social
Media Roundup” will help you stay up-to-date on all the important social
media news you need to know.
Facebook
Leadership Team Reorganization
Facebook reorganized its leadership teams this past week, according to Recode, with shake-ups at the parent company as well as at Instagram, Messenger and WhatsApp. One of the teams created as part of the reorganization will focus on blockchain technology. Recode said Facebook is structuring the company under three main groups: apps; new platforms and infrastructure; and central product services.
The apps division will be led by chief product officer Chris Cox. Facebook’s VP of Internet.org Chris Daniels will be overseeing the development of WhatsApp following the departure of Jan Koum. And Stan Chudnovsky will be the head of the Messenger team. David Marcus is moving from the head of Messenger to the team that is heading up blockchain initiatives. And Will Cathcart is going to focus on the main Facebook app.
The new platforms and infrastructure team will be headed up by CTO Mike Schroepfer. Reporting to Schroepfer will be Andrew “Boz” Bosworth (head of AR, VR and hardware teams), David Marcus (blockchain initiatives), Jay Parikh (head of the team involved in privacy products and security initiatives), Kang-Xing Jin (head of Facebook Workplace) and Jerome Pesenti (head of artificial intelligence).
And the Central Product Services arm is going to be led by Javier Olivan. This division will handle ads, security and growth. Olivan will be managing Mark Rabkin (head of ads and local efforts), Naomi Gleit (community growth and social good) and Alex Schultz (growth marketing, data analytics and internationalization).
Adam Mosseri is moving from the News Feed to Instagram as the VP of Product. And the previous VP of Product at Instagram, Kevin Weil, is moving to the new blockchain team.
Cryptocurrency
According to Cheddar, Facebook is rumored to be considering its own cryptocurrency. It is believed that Facebook’s cryptocurrency would be used specifically for facilitating payments on the social network.
And Facebook is also looking into ways to utilize the digital currency using blockchain technology. This rumor coincides with Facebook’s decision to have David Marcus head up a blockchain division at Facebook.
Malicious Ads Purchased By Russians Released By Congress
According to USA Today, Democrats on the House Intelligence Committee last week released thousands of Russian Facebook ads. The ads were used to stoke tensions among Americans during and after the 2016 U.S. presidential election. They were bought by the Internet Research Agency, an organization allegedly linked to the Kremlin. Facebook responded to this malicious content by restricting political ads and requiring that the organizations purchasing them be disclosed.
A large portion of the ads were set up by Russians pretending to be Americans. And many of those ads had simply exploited divisive issues like immigration, race, gay rights and gun control to drive animosity between groups of people especially in states like Michigan, Pennsylvania, Virginia and Wisconsin.
Some of the ads were ineffective, while others were seen over a million times. The ads ran over a period of more than two years, starting around June 2015, and increased in volume as the election drew closer.
Once Facebook turned the ads over to Congress, dozens of them were made public. House Intelligence Committee leaders said at the time that all of the ads would be made public to increase awareness of the manipulation pushed by the Russian organization.
In a blog post, Facebook said it has started to deploy new tools and teams to identify threats proactively in the run-up to specific elections. Currently, Facebook is tracking over 40 elections. Going forward, Facebook has to tread carefully about how data is being handled considering it is still recovering from the Cambridge Analytica scandal in which personal details of 87 million users were exploited.
New Facebook Live Tools
Facebook pointed out that average daily broadcasts from verified publisher Pages increased 1.5X over the past year. This past week, Facebook product manager Matt Labunka said new features are being rolled out to make it easier for publishers to go live.
Live API Update:
Facebook has made the setup process easier for users that frequently utilize the Live API. “Publishers and creators who frequently use the Live API have requested a more simplified stream setup process, and we've rolled out the ability to use a persistent stream key with an encoder when going live on Facebook,” wrote Labunka. “This means if you're a publisher or creator that goes live regularly, you now only need to send one stream key to production teams, and because a Page's stream key is permanent, it can be sent in advance of a shoot — making it easier to collaborate across teams and locations for live productions. Broadcasters can also save time by using the same stream key every time they start a new Live video.”
An example of where this has saved some time is how gaming creator Darkness429 goes live at 3 p.m. every weekday. Having a persistent stream key has made this process easier for him.
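The persistent-key workflow Labunka describes can be sketched roughly as follows. Note that the hostname, URL format and key value below are invented for illustration; they are not Facebook's actual Live API ingest endpoints.

```python
# Sketch of reusing a persistent stream key across broadcasts.
# The ingest hostname, URL shape and key value are illustrative
# assumptions, not Facebook's real Live API endpoints.

def rtmp_ingest_url(stream_key: str, host: str = "live-ingest.example.com") -> str:
    """Build the RTMPS ingest URL an encoder would be pointed at."""
    return f"rtmps://{host}:443/rtmp/{stream_key}"

# Because a Page's stream key is persistent, it can be saved once,
# shared with production teams ahead of a shoot, and reused for
# every broadcast instead of generating a fresh key each time.
PAGE_STREAM_KEY = "page-1234-persistent-key"  # hypothetical value

monday_url = rtmp_ingest_url(PAGE_STREAM_KEY)
friday_url = rtmp_ingest_url(PAGE_STREAM_KEY)
assert monday_url == friday_url  # same key, same ingest URL every broadcast
```

The point is simply that one stable key replaces a per-broadcast setup step, which is what makes sharing it with a production team ahead of time possible.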
Crossposting:
Facebook also launched Live Crossposting. This feature allows Pages to seamlessly publish a single broadcast across multiple Pages at the same time. And it will be displayed as an original post by each Page. Doing this would enable the Live stream to reach a broader audience.
Live Rewind:
Facebook is currently testing the ability for viewers to rewind Live videos from Pages as they stream. Facebook said CrossFit Games called the feature “massive” for its viewers. “They have different points of discovery, want to go back, or miss a key play... It’s huge,” CrossFit Games said via Facebook. Once testing is complete, the feature should be available to all of Facebook’s users.
Stories Soundtrack Test
Instagram is reportedly testing a feature that would allow users to add music to Stories, based on code found within its Android app, according to TechCrunch. The “music stickers” would essentially allow users to search for music and add song clips to posts. This is made possible through Facebook’s partnership with music labels. Instagram is also testing the ability to automatically detect a song playing in the background and create a sticker with the artist and song information.
Jane Manchun Wong was briefly able to test out the feature.
DM Improvements For Businesses
Instagram now makes it easier for businesses to manage direct messages through its platform. Businesses will now see important customer messages in the main Direct inbox rather than the pending folder. And businesses will be able to star and filter conversations to follow up on. Plus Instagram is currently testing quick replies so businesses can easily respond to common questions.
Emoji Sliding Scale
Instagram has launched a new emoji polling slider feature for Stories that allows your friends to rate content on a scale rather than the standard yes/no buttons. The emoji animates as you drag it back and forth on the scale.
"To add an emoji slider sticker to your story, select it from the sticker tray after taking a photo or video. Place it anywhere you’d like and write out your question," said Instagram in a blog post. "Then, set the emoji that best matches your question’s mood. You can pick from a few of the most popular emoji, or choose almost any emoji from your library if you have something specific in mind."
Klout
Shut Down
Lithium has announced that it is shutting down Klout, the website that scored the influential power of social media users with its “Klout Score.” Lithium reportedly acquired Klout for $200 million in March 2014, and Klout confirmed the shutdown on Twitter.
Slack
8 Million DAUs And 3 Million Paid Users
TechCrunch reported this past week that workplace collaboration company Slack has hit 8 million daily active users (DAUs) and 3 million paid users. This is up from September when Slack was reportedly hitting 6 million DAUs, 2 million paid users and $200 million in annual recurring revenue. Over half of Slack’s users are outside the U.S.
Snap
Tim Stone Named CFO
Snap’s chief financial officer, Drew Vollero, is being succeeded by Tim Stone, a former VP of finance at Amazon with a background in digital content. Vollero is leaving to pursue other opportunities and will remain a paid “non-employee advisor” until August 15 to help with the transition.
“I am deeply grateful for Drew and his many contributions to the growth of Snap,” said CEO Evan Spiegel in a statement. “He has done an amazing job as Snap's first CFO, building a strong team and helping to guide us through our transition to becoming a public company. The discipline that he has brought to our business will serve us well into the future. We wish Drew continued success and all the best.”
According to CNBC, Stone’s salary will be $500,000, and he will receive restricted stock units valued at $20 million plus 500,000 stock options subject to time-based vesting.
Redesign Rollback Begins
Snap is starting to roll back its redesign on the Snapchat app. The redesigned Snapchat app was not very popular as over 1.2 million people signed a petition to go back to the original design.
The latest design makes Snaps and Chats show up in chronological order again. And the Stories have been moved back to the right-hand side of the app again. One thing that will be retained from the redesign is that Stories from your friends will be separated from brands. And there is a separate Subscriptions feed which can be searched.
The updated design will be coming to iOS first. But it is unknown when the rollback will happen on Android.
Encrypted Messaging Feature
Twitter is believed to be testing an encrypted messaging feature that would compete against services like WhatsApp, Telegram and Signal, according to a tweet from Jane Manchun Wong about the rumored service.
Facebook And Instagram Videos Are Now Playable Within The App
WhatsApp now has the ability to play Facebook and Instagram videos within the app. So when your contacts send you these types of videos, you can watch them without having to leave WhatsApp. WhatsApp already offers the ability to watch YouTube videos within the app without switching over to the YouTube app.
This feature is already available in the iOS version of WhatsApp, but not yet on Android. The updated iOS version also gives group admins the ability to grant or revoke certain rights for other users in a group, such as the ability to rename it.
YouTube
$5 Million For Creators For Change
YouTube is investing about $5 million for the “Creators for Change” program, which will be provided to 47 creators. Of the 47 creators, 31 are new members. The creators will be sharing positive videos about global issues such as hate speech and xenophobia.
“As part of our $5M investment in this program, these creators will receive support from YouTube through a combination of project funding, mentorship opportunities, and ongoing production assistance at our YouTube Spaces,” said YouTube in a blog post. “They’ll also join us for our second annual Social Impact Camp at YouTube Space London this summer. The Social Impact Camp is an exclusive two-day-long camp featuring inspirational speakers, video production workshops, and mentorship opportunities with experts as well as time for the Ambassadors to connect with one another.”
“Take A Break” Notifications
If you find yourself spending a lot of time on YouTube, you will be able to set up reminders to take a break. This is part of Google’s broader Digital Wellbeing initiative. YouTube’s “take a break” notifications show a prompt when you have spent too much time on the service.
You can access this feature by tapping on your profile photo at the top right of the mobile app > Settings > General > “Remind me to take a break.” From there, you can select the choices: Never, every 15 minutes, every 30 minutes, every 60 minutes, every 90 minutes or every 180 minutes.
Read More:
- May 6, 2018 - Social Media Roundup: Facebook Dating Feature, Instagram Video Chat, WhatsApp Group Video Calls
- April 29, 2018 - Social Media Roundup: Facebook Apps Get Restricted, Instagram Data Download, New Snapchat Spectacles
- April 22, 2018 - Social Media Roundup: Facebook External Tracking, Instagram Explore Redesign, New Snapchat Filters
What are your thoughts on this article? Send me a tweet at @amitchowdhry or connect with me on LinkedIn.
Ignite Your GPU Database Strategy By Addressing GDPR
Nima Negahban, Kinetica
With just over a month until the European Union’s (EU) General Data Protection Regulation
(GDPR) goes into effect, Facebook is moving its data controller entity
from Facebook Ireland to Facebook USA, keeping more than 1.5 billion
users out of the reach of the European privacy law. Mark Zuckerberg, who
promised to apply the “spirit” of the legislation globally, is moving
users located in Africa, Asia, Australia, and Latin America to sites
governed by US law rather than European law.
Clearly, May 25—the day GDPR goes
into effect—is a pivotal day that will have a global ripple effect well
beyond Europe. It will impact how we manage and use data across the
world in the Extreme Data Economy. It does not, however, need to be viewed as a regulatory “tax” to avoid. As companies embrace business differentiating innovations, such as GPU databases, they can simultaneously meet the key requirements of GDPR.
The GDPR “was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy, and to reshape the way organizations across the region approach data privacy.” GDPR covers the entire EU and explicitly states that companies that fail to comply with the regulation are subject to a penalty up to 20 million euro, or 4% of global revenue, whichever is greater.
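The penalty cap quoted above reduces to a simple maximum. As a quick worked sketch (the function name is mine, not from any GDPR tooling):

```python
def gdpr_penalty_cap(global_revenue_eur: float) -> float:
    """Maximum GDPR fine: the greater of 20 million euro or 4% of global revenue."""
    return max(20_000_000.0, 0.04 * global_revenue_eur)

# For a company with 10 billion euro in global revenue,
# 4% is 400 million euro, which exceeds the 20M floor:
assert gdpr_penalty_cap(10_000_000_000) == 400_000_000.0

# For a company with 100 million euro in revenue,
# 4% is only 4 million euro, so the 20M floor applies:
assert gdpr_penalty_cap(100_000_000) == 20_000_000.0
```

The "whichever is greater" clause is what makes the fine meaningful for companies of every size: small firms face the fixed floor, while for large firms the 4% term dominates.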
A major misconception is that the regulation applies to EU companies only; in actuality, the regulation applies to any company holding data from EU citizens.
With regards to an enterprise data strategy, there are a number of key considerations that must be addressed, including data profiling, the right to be forgotten, automated personal data processing, data pseudonymization, and data breaches. Each of these areas demands healthy consideration, balancing privacy concerns against innovation.
The GDPR exists because enterprises have not been thoughtful enough around data privacy, forcing governments (like the EU) to mandate change. Many of their offenses are much less dramatic than the salacious stories around companies like Facebook and Cambridge Analytica.
The GDPR forces us to think creatively about how to reconstitute the business to comply with regulation. Savvy enterprises will figure out how to meet these requirements by combining these efforts with new data innovation investments.
For instance, NVIDIA (NVDA) GPUs are redefining how companies translate data into insight, leveraging the massive parallel processing power of GPUs rather than CPUs. This has created a new category of GPU infrastructure, including a GPU database, to revolutionize data practices.
From a business perspective, GPU database technology accomplishes several things. A GPU database dramatically accelerates analysis of billions of rows of data, with an in-memory GPU architecture that speeds parallel processing. It can deliver results in milliseconds. It provides near-linear scalability without the need to index. It can take geospatial and streaming data and turn it into visualizations that reveal interesting patterns and business opportunities, capitalizing on the GPU’s particular aptitudes, including rendering the visuals themselves. GPU databases have seamless machine learning capabilities, enabling organizations to easily leverage Google’s popular Tensorflow and other AI frameworks via User Defined Functions that analyze the complete set of data. In short, the GPU foundation is a massive opportunity to build a data-powered architecture that not only allows businesses to do more with data, but also helps align with GDPR regulations.
A GPU database can also help a business comply with GDPR regulation:
- Breach Notification. A key requirement of GDPR is for a business to notify relevant authorities of data breaches within 72 hours of becoming aware of an attack. GPU databases arm businesses with the ability to do brute force analysis of billions of rows of data in real-time. The power of the GPU database is the ability to not only look at batch data, but also real-time streaming data. It provides organizations with blazing-fast analytics, the ability to conduct more complex analysis than traditional BI tools, and a “bigger brain” to run machine learning algorithms across constantly changing data sources. In short, the GPU database provides a more powerful means to assess risk of breach, making it easier to identify breaches and remediate within shorter periods of time.
- Bias & Profiling. The GDPR prohibits using personal data that “reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.” Data scientists analyzing data involving personal data can no longer work on homegrown data science platforms built “off the grid.” Given that a GPU database architecture enables data scientists to access a centralized engine where data is managed, businesses can eliminate data science sprawl and implement a centralized data architecture and workflow for governance.
- Data Lineage & Auditability. Under GDPR, data scientists must be able to identify where data is generated and provide an audit trail of where it resides. With a GPU database architecture, data can be assigned a unique identifier and an audit trail can be produced identifying the in-memory GPU where data is pinned. This enables businesses to track the data lifecycle and maintain a comprehensive audit trail of where it was used.
- 360-Degree View of Business. In order to meet GDPR obligations, you need to know, at all times, what sensitive data you are collecting and all the places it is stored. A GPU database allows companies to visualize, analyze, and generate insight around batch data, streaming data, IoT data, location-based data, and many other unpredictable sources. The ability to visualize the business in motion is critical to understanding how data is used across all divisions. This 360-degree view is critical to properly understanding an organization’s holistic data strategy and to identifying anomalies. It also enables a business to more easily watch and track incoming personal data to address key GDPR requirements such as the right to be forgotten. Given the complexity of GDPR, it is critical that businesses paint a picture of where data is used and resides so they have the agility to address GDPR issues as they arise.
- Reduce attack surface with GPUs. A single NVIDIA Deep Learning System has 81,920 CUDA cores. The equivalent number of cores on a CPU would require 1,280 servers (81,920/64). The wider your attack surface for managing data, the more complex and challenging it is to meet GDPR requirements. Using GPUs to drive data consolidation simplifies the data architecture and makes it easier to be GDPR compliant.
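The data lineage and auditability point above can be sketched in a few lines: tag each record with a unique identifier at ingest, then append every placement to an audit trail. This is a minimal illustration with invented record and event shapes, not any particular GPU database's API.

```python
# Minimal sketch of data lineage: a stable ID assigned at ingest,
# plus an append-only trail of where each record has resided.
import uuid
from datetime import datetime, timezone

audit_trail = []  # append-only log of lineage events

def ingest(record, origin):
    """Assign a unique ID at ingest and log where the record came from."""
    tagged = {"id": str(uuid.uuid4()), **record}
    audit_trail.append({
        "id": tagged["id"],
        "event": "ingested",
        "origin": origin,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return tagged

def place(record, location):
    """Record where the data is pinned (e.g. which GPU node or store)."""
    audit_trail.append({
        "id": record["id"],
        "event": "placed",
        "location": location,
        "at": datetime.now(timezone.utc).isoformat(),
    })

rec = ingest({"email": "user@example.com"}, origin="signup-form")
place(rec, location="gpu-node-3")

# The trail can now answer the GDPR questions: where was this record
# generated, and everywhere it has resided since.
history = [e for e in audit_trail if e["id"] == rec["id"]]
assert [e["event"] for e in history] == ["ingested", "placed"]
```

The same trail is what makes the right to be forgotten tractable: given a record's ID, every location listed in its history is a place the data must be deleted from.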
What You Need To Know About The American Idol Live! 2018 Tour
Get your rowdy cheers ready, because American Idol is likely coming to a city
near you. Following a long-awaited return to television in 2018 after a
two-year hiatus (and a network change from Fox to ABC), the popular show
is taking a summer road trip with the American Idol Live! 2018 tour. The 40+ city tour kicks off on Wednesday, July 11 in Redding, CA and wraps up on Sunday, September 16 in Washington DC.
The tour gives fans the chance to experience the talented vocals of this season’s 7 finalists: Cade Foehner, Caleb Lee Hutchinson, Catie Turner, Gabby Barrett, Jurnee, Maddie Poppe and Michael J. Woodard, in an electrifying live setting. The shows will also showcase 2018’s newly-crowned winner, and will be hosted by special guest Kris Allen, who many die-hard fans recognize as Season 8’s American Idol winner. To add to the excitement, In Real Life, winner of ABC's 2017 summer reality competition show Boy Band, will join in the fun on select dates. In Real Life currently has three hot singles on the airwaves: "Eyes Closed," "Tattoo (How 'Bout You)" and their first Spanish track "How Badly."
American Idol has had an unbelievable run since first premiering on Fox in 2002. Its first 15 seasons on television attracted more than 40 million live viewers at one point, who tuned in every week to watch the show transform everyday Americans - albeit with incredible vocal gifts - from obscurity to stardom. Winners Carrie Underwood and Kelly Clarkson, and finalists Jennifer Hudson and Chris Daughtry, are just a few of the show’s contestants who skyrocketed to fame to become household names after appearing on the show. However, the last time American Idol went on a live tour was in 2015, highlighting winner Nick Fradiani.
This year’s tour will be managed by Jared Paul, a seasoned entertainment manager whose clients include New Kids on the Block. Paul has produced several touring productions of former television shows like “Glee,” “Dancing with the Stars” and “America’s Got Talent,” and will bring his management experience to the production of this year’s American Idol Live! 2018 tour.
Tickets went on sale Friday. Check out StubHub for tickets, but hurry, the dates will sell out fast.
For product reviews, gift ideas, and latest deals, Subscribe to the Forbes Finds newsletter.
____________________________________
Facebook Replaces Lobbying Executive Amid Regulatory Scrutiny
WASHINGTON — Facebook on Tuesday replaced its head of policy in the United States,
Erin Egan, as the social network scrambles to respond to intense
scrutiny from federal regulators and lawmakers.
Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for
lobbying and government relations as head of policy for the last two
years. She will be replaced by Kevin Martin on an interim basis, the
company said. Mr. Martin has been Facebook’s vice president of mobile
and global access policy and is a former Republican chairman of the
Federal Communications Commission.
Ms. Egan will remain chief privacy officer and focus on privacy policies
across the globe, Andy Stone, a Facebook spokesman, said.
The executive reshuffling in Facebook’s Washington offices followed a
period of tumult for the company, which has put it increasingly in the
spotlight on Capitol Hill. Last month, The New York Times and others
reported that the data of millions of Facebook users had been harvested by the British political research firm Cambridge Analytica. The ensuing outcry led Facebook’s chief executive, Mark Zuckerberg, to testify at two congressional hearings this month.
Since the revelations about Cambridge Analytica, the Federal Trade Commission has started an investigation
of whether Facebook violated promises it made in 2011 to protect the
privacy of users, making it harder for the company to share data with
third parties.
At the same time,
Facebook is grappling with increased privacy regulations outside the
United States. Sweeping new privacy laws called the General Data Protection Regulation
are set to take effect in Europe next month. And Facebook has been
called to talk to regulators in several countries, including Ireland,
Germany and Indonesia, about its handling of user data.
Mr. Zuckerberg told Congress this month that Facebook had grown too fast and that he hadn’t foreseen the problems the platform would confront.
“Facebook is an idealistic
and optimistic company,” he said. “For most of our existence, we focused
on all the good that connecting people can bring.”
The executive shifts put two Republican men in charge of Facebook’s Washington offices. Mr. Martin will report to Joel Kaplan, vice president of global public policy. Mr. Martin and Mr. Kaplan worked together in the George W. Bush White House and on Mr. Bush’s 2000 presidential campaign.
Facebook
hired Ms. Egan in 2011; she is a frequent headliner at tech policy
events in Washington. Before joining Facebook, she spent 15 years as a
partner at the law firm Covington & Burling as co-chairwoman of the
global privacy and security group.
Facebook
is undergoing other executive changes. Last month, The Times reported
that Alex Stamos, Facebook’s chief information security officer, planned to leave the company after disagreements over how to handle misinformation on the site.
_____________________________________
________________________________________
Facebook releases long-secret rules on how it polices the service
MENLO PARK, Calif. (Reuters) - Facebook Inc (FB.O)
on Tuesday released a rule book for the types of posts it allows on its
social network, giving far more detail than ever before on what is
permitted on subjects ranging from drug use and sex work to bullying,
hate speech and inciting violence.
Now, the company is providing the longer document on its website to clear up confusion and be more open about its operations, said Monika Bickert, Facebook’s vice president of product policy and counter-terrorism.
“You should, when you come to Facebook, understand where we draw these lines and what’s OK and what’s not OK,” Bickert told reporters in a briefing at Facebook’s headquarters.
Facebook has faced fierce criticism from governments and rights groups in many countries for failing to do enough to stem hate speech and prevent the service from being used to promote terrorism, stir sectarian violence and broadcast acts including murder and suicide.
At the same time, the company has also been accused of doing the bidding of repressive regimes by aggressively removing content that crosses governments and providing too little information on why certain posts and accounts are removed.
New policies will, for the first time, allow people to appeal a decision to take down an individual piece of content. Previously, only the removal of accounts, Groups and Pages could be appealed.
Facebook is also beginning to provide the specific reason why content is being taken down for a wider variety of situations.
Facebook, the world’s largest social network, has become a dominant source of information in many countries around the world. It uses both automated software and an army of moderators that now numbers 7,500 to take down text, pictures and videos that violate its rules. Under pressure from several governments, it has been beefing up its moderator ranks since last year.
Bickert told Reuters in an interview that the standards are constantly evolving, based in part on feedback from more than 100 outside organizations and experts in areas such as counter-terrorism and child exploitation.
“Everybody should expect that these will be updated frequently,” she said.
The company considers changes to its content policy every two weeks at a meeting called the “Content Standards Forum,” led by Bickert. A small group of reporters was allowed to observe the meeting last week on the condition that they could describe process, but not substance.
At the April 17 meeting, about 25 employees sat around a conference table while others joined by video from New York, Dublin, Mexico City, Washington and elsewhere.
Attendees included people who specialize in public policy, legal matters, product development, communication and other areas. They heard reports from smaller working groups, relayed feedback they had gotten from civil rights groups and other outsiders and suggested ways that a policy or product could go wrong in the future. There was little mention of what competitors such as Alphabet Inc’s Google (GOOGL.O) do in similar situations.
Bickert, a former U.S. federal prosecutor, posed questions, provided background and kept the discussion moving. The meeting lasted about an hour.
Facebook is planning a series of public forums in May and June in different countries to get more feedback on its rules, said Mary deBree, Facebook’s head of content policy.
FROM CURSING TO MURDER
The longer version of the community standards document, some 8,000 words long, covers a wide array of words and images that Facebook sometimes censors, with detailed discussion of each category. Videos of people wounded by cannibalism are not permitted, for instance, but such imagery is allowed with a warning screen if it is “in a medical setting.”
Facebook has long made clear that it does not allow people to buy and sell prescription drugs, marijuana or firearms on the social network, but the newly published document details what other speech on those subjects is permitted.
Content in which someone “admits to personal use of non-medical drugs” should not be posted on Facebook, the rule book says.
The document elaborates on harassment and bullying, barring for example “cursing at a minor.” It also prohibits content that comes from a hacked source, “except in limited cases of newsworthiness.”
In those cases, Bickert said, formal written requests are required and are reviewed by Facebook’s legal team and outside attorneys. Content deemed to be permissible under community standards but in violation of local law - such as a prohibition in Thailand on disparaging the royal family - is then blocked in that country, but not globally.
The community standards also do not address false information - Facebook does not prohibit it but it does try to reduce its distribution - or other contentious issues such as use of personal data.
________________________________________
Facebook may face billions in fines over its Tag Suggestions feature
A federal judge ruled in favor of a class action lawsuit certification
Facebook could face billions of dollars in fines after a federal judge ruled
that the company must face a class action lawsuit. The lawsuit alleges
that Facebook’s facial recognition features violate Illinois law by
storing biometric data without user consent.
The lawsuit involves Facebook’s Tag Suggestions tool, which identifies users in uploaded photos and suggests automatically tagging their friends. The feature was launched on June 7th, 2011. According to the suit, the complainants allege that Facebook “collects and stores their biometric data without prior notice or consent in violation of their privacy rights.” Illinois’ Biometric Information Privacy Act (BIPA) requires explicit consent before companies can collect biometric data like fingerprints or facial recognition profiles.
It should be noted that Facebook has since also added a more direct notification
alerting users to its facial recognition features, but this lawsuit is
based on the earlier collection of user data. With the order, millions
of the social network’s users could collectively sue the company, with
violations of BIPA incurring a fine of between $1,000 and $5,000 each time someone’s image is used without permission.
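The billions figure follows from simple arithmetic on those statutory amounts. A back-of-the-envelope sketch, assuming a hypothetical class of six million Illinois users (the actual class size had not been fixed) and one violation per member:

```python
# BIPA statutory damages: $1,000 per negligent violation,
# $5,000 per intentional or reckless violation.
NEGLIGENT, RECKLESS = 1_000, 5_000

def bipa_exposure(class_size: int, violations_per_member: int = 1) -> tuple:
    """Return the (low, high) statutory-damages range in dollars."""
    n = class_size * violations_per_member
    return n * NEGLIGENT, n * RECKLESS

# Hypothetical class of 6 million Illinois users, one violation each:
low, high = bipa_exposure(6_000_000)
print(f"${low:,} to ${high:,}")  # $6,000,000,000 to $30,000,000,000
```

Even at the low end, a class in the millions pushes the exposure into the billions, which is the point Facebook itself raised in opposing certification.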
In the court order, Judge James Donato wrote:
“A class action is clearly superior to individual proceedings here. While not trivial, BIPA’s statutory damages are not enough to incentivize individual plaintiffs given the high costs of pursuing discovery on Facebook’s software and code base and Facebook’s willingness to litigate the case...Facebook seems to believe that a class action is not superior because statutory damages could amount to billions of dollars.”
The Tag Suggestions feature works in four steps: software tries to detect the faces in uploaded photos; once a face is detected, Facebook computes a “face signature,” a series of numbers that “represents a particular image of a face” based on the photo; the system then searches a database of “face templates” for a signature that matches; and if a match is found, Facebook suggests the tag. Facebook doesn’t store face signatures and only keeps face templates.
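The flow described above amounts to detect, compute a signature, search stored templates, then suggest a tag. A toy sketch of that control flow, with stand-in data structures (the real system derives signatures from image pixels; everything here is hypothetical):

```python
# Toy model of the four-step Tag Suggestions flow described above.
# A "signature" here is just a tuple of numbers standing in for the
# series of numbers computed from a particular photo of a face.

def detect_faces(photo: dict) -> list:
    """Step 1: return the face signatures found in an uploaded photo."""
    return photo.get("face_signatures", [])

def match_template(signature: tuple, templates: dict):
    """Steps 2-3: search the face-template database for a match."""
    for user, template in templates.items():
        if signature == template:
            return user
    return None

def suggest_tags(photo: dict, templates: dict) -> list:
    """Step 4: suggest a tag for each matched face; per the article,
    signatures are discarded and only templates are kept."""
    return [u for sig in detect_faces(photo)
            if (u := match_template(sig, templates)) is not None]

templates = {"alice": (0.1, 0.7), "bob": (0.4, 0.2)}
photo = {"face_signatures": [(0.4, 0.2), (0.9, 0.9)]}
print(suggest_tags(photo, templates))  # ['bob']
```

The unmatched signature simply produces no suggestion, mirroring the claim that only a portion of detected faces ever get a computed match.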
Facebook says its automatic tagging feature detects 90
percent of faces in photos. The lawsuit claims about 76 percent of faces
in the photos have face signatures computed. Tag Suggestions is available only in limited markets; it is primarily offered to users in the US, with the option to turn the feature off.
A lawyer for Facebook users, Shawn Williams, told Bloomberg:
“As more people become aware of the scope of Facebook’s data collection and as consequences begin to attach to that data collection, whether economic or regulatory, Facebook will have to take a long look at its privacy practices and make changes consistent with user expectations and regulatory requirements.”
Facebook also launched a new feature back in December that notifies users when someone uploads a photo of them, even if they’re not tagged. In a statement to The Verge, Facebook
said, “We are reviewing the ruling. We continue to believe the case has
no merit and will defend ourselves vigorously.” Facebook also says it
has always been upfront about how the tag function works, and users can easily turn it off if they wish.
________________________________________
Facebook points finger at Google and Twitter for data collection
“Other companies suck in your data too,” Facebook explained in many, many words today with a blog post detailing how it gathers information about you from around the web.
Facebook product management director David Baser wrote, “Twitter, Pinterest and LinkedIn all have similar Like and Share buttons to help people share things on their services. Google has a popular analytics service. And Amazon, Google and Twitter all offer login features. These companies — and many others — also offer advertising services. In fact, most websites and apps send the same information to multiple companies each time you visit them.” Describing how Facebook receives cookies, IP address, and browser info about users from other sites, he noted, “when you see a YouTube video on a site that’s not YouTube, it tells your browser to request the video from YouTube. YouTube then sends it to you.”
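The mechanism Baser describes is ordinary browser behavior: embedding a third-party button or video makes the visitor’s browser issue a request to that third party, and the request itself carries identifying details. A rough sketch of what such a request exposes (the header names are standard HTTP; the hosts and values are made up):

```python
# What a browser hands to an embedded widget's host, per the mechanism
# described above: the page being visited (Referer), any cookie that host
# previously set, and browser details (User-Agent); the server also sees
# the client's IP address when the connection arrives.
def embed_request(page_url: str, widget_host: str, cookies: dict) -> dict:
    headers = {
        "Host": widget_host,
        "Referer": page_url,                # which page embedded the widget
        "User-Agent": "ExampleBrowser/1.0", # browser and OS details
    }
    if widget_host in cookies:              # cookie identifies a prior visitor
        headers["Cookie"] = cookies[widget_host]
    return headers

req = embed_request("https://news.example/story",
                    "facebook.com", {"facebook.com": "uid=12345"})
print(req["Referer"], req["Cookie"])
```

A visitor with no cookie for the widget’s host still reveals the page URL, browser details and IP address, which is how sites receive information about people who are logged out or have no account.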
It seems Facebook is tired of being singled out. The tacked-on “them too!” statements at the end of its descriptions of opaque data collection practices might have been meant to normalize the behavior, but they come off feeling a bit petty.
The blog post also fails to answer one of the biggest lines of questioning from CEO Mark Zuckerberg’s testimonies before Congress last week. Zuckerberg was asked by Representative Ben Lujan about whether Facebook constructs “shadow profiles” of ad targeting data about non-users.
Today’s blog post merely notes that “When you visit a site or app that uses our services, we receive information even if you’re logged out or don’t have a Facebook account. This is because other apps and sites don’t know who is using Facebook. Many companies offer these types of services and, like Facebook, they also get information from the apps and sites that use them.”
Facebook has a lot more questions to answer about this practice, since most of its privacy and data controls are only accessible to users who’ve signed up.
Judge says class action suit against Facebook over facial recognition can go forward
Whenever a company may be guilty of something, from petty neglect to grand deception, there’s usually a class action lawsuit filed. But until a judge rules that lawsuit legitimate, the threat remains fairly empty. Unfortunately for Facebook, one major suit from 2015 has just been given that critical go-ahead.
The case concerns an Illinois law that prohibits collection of biometric information, including facial recognition data, in the way that Facebook has done for years as part of its photo-tagging systems.
BIPA, the Illinois law, is a real thorn in Facebook’s side. The company has not only been pushing to have the case dismissed, but it has been working to have the whole law changed by supporting an amendment that would defang it — but more on that another time.
(Update: Although Facebook’s own Manager of State Policy Daniel Sachs co-chairs a deregulatory tech council in the Illinois Chamber of Commerce that proposed the amendment, the company maintains that “We have not taken any position on the proposed legislation in Illinois, nor have we suggested language or spoken to any legislators about it.” You may decide for yourself the merit of that claim.)
Judge James Donato in California’s Northern District has made no determination as to the merits of the case itself; first, it must be shown that there is a class of affected people with a complaint that is supported by the facts.
For now, he has found that “plaintiffs’ claims are sufficiently cohesive to allow for a fair and efficient resolution on a class basis.” The class itself will consist of “Facebook users located in Illinois for whom Facebook created and stored a face template after June 7, 2011.”
The data privacy double-standard
That said, other tech companies have gotten off light. Whether it’s because Apple and Google aren’t CEO’d by their founders any more, or we’ve grown to see iOS and Android as such underlying platforms that they aren’t responsible for what third-party developers do, scrutiny has focused on Zuckerberg and Facebook.
The Cambridge Analytica scandal emerged from Facebook being unable to enforce its policies that prohibit developers from sharing or selling data they pull from Facebook users. Yet it’s unclear whether Apple and Google do a better job at this policing. And while Facebook let users give their friends’ names and interests to Dr. Aleksandr Kogan, who sold it to Cambridge Analytica, iOS and Android apps routinely ask you to give them your friends’ phone numbers, and we don’t see mass backlash about that.
At least not yet.
How Facebook’s Past Data Policy Has Come Back to Haunt It
A video explaining how policies started back in 2007 led to misuse of data by Cambridge Analytica....
Mark Zuckerberg’s Washington Mission: Stay Cool in a Very Hot Seat
The Facebook chief will be tested as he appears before Congress about privacy issues
By Betsy Morris and Deepa Seetharaman
April 8, 2018 7:34 p.m. ET
A year ago, Mark Zuckerberg was preparing to deliver the commencement speech at Harvard University. As well as a personal milestone, it was the kind of carefully choreographed, profoundly upbeat event at which he excels.
Facebook To Alert Users Affected By Cambridge Analytica Data Breach
If you were one of the 87 million Facebook users who might have been affected by the social media platform’s recent Cambridge Analytica privacy breach, then you will be getting a detailed message in your news feed starting Monday.
In the wake of the scandal, Facebook said that users who may have had their data shared with Cambridge Analytica will be getting messages this week, according to the Associated Press. In addition, in an effort to do some damage control, all Facebook users will be receiving a notice with a link to see what apps they use and what information they have shared with those apps. They will be given the option to shut off these apps or completely turn off access to third-party apps.
The ongoing Cambridge Analytica scandal has been a thorn in Mark Zuckerberg
and Facebook’s side. In March, it was reported that Cambridge
Analytica, the data firm backed by Donald Trump supporter Robert Mercer
and once steered by former Trump advisor Steve Bannon, obtained personal information from 50 million Facebook users
without permission. That data was then used to target voters and influence the 2016 election. Of those affected, Facebook said more than 70 million of the 87 million users are in the U.S., with over a million each in the Philippines, Indonesia, and the U.K.
Zuckerberg has since acknowledged that this has been a “huge mistake”. He is set to testify before a joint session of the Senate Judiciary and Commerce Committees on April 10, then appear the next day before the House Energy and Commerce Committee, answering growing questions about data privacy and how Facebook plans to address the problem.
___________________________
Facebook suspends another data analytics firm after CNBC discovers it was using tactics like Cambridge Analytica
- Data analytics firm CubeYou used personality quizzes clearly labeled for "non-profit academic research" to help marketers find customers.
- One of its quizzes, "You Are What You Like" which also goes by "Apply Magic Sauce," states it is only for "non-profit academic research that has no connection whatsoever to any commercial or profit-making purpose or entity."
- When CNBC showed Facebook the quizzes and terms, which are similar to the methods used by Cambridge Analytica, Facebook said it was going to suspend CubeYou from the platform to investigate.
CNBC.com
Facebook
is suspending a data analytics firm called CubeYou from the platform
after CNBC notified the company that CubeYou was collecting information
about users through quizzes.
CubeYou misleadingly labeled its quizzes "for non-profit academic research," then shared user information with marketers. The scenario is eerily similar to how Cambridge Analytica received unauthorized access to data from as many as 87 million Facebook user accounts to target political marketing.
The company sold data that had been collected by researchers working with the Psychometrics Lab at Cambridge University, similar to how Cambridge Analytica used information it obtained from other professors at the school for political marketing.
The CubeYou discovery suggests that collecting data from quizzes and using it for marketing purposes was far from an isolated incident. Moreover, the fact that CubeYou was able to mislabel the purpose of the quizzes — and that Facebook did nothing to stop it until CNBC pointed out the problem — suggests the platform has little control over this activity.
Facebook, however, disputed the implication that it can't exercise proper oversight over these types of apps, telling CNBC that it can't control information that companies mislabel. Upon being notified of CubeYou's alleged violations, Facebook said it would suspend all CubeYou's apps until a further audit could be completed.
"These are serious claims and we have suspended CubeYou from Facebook while we investigate them," Ime Archibong, Facebook vice president of product partnerships, said in a statement.
"If they refuse or fail our audit, their apps will be banned from Facebook. In addition, we will work with the UK ICO [Information Commissioner's Office] to ask the University of Cambridge about the development of apps in general by its Psychometrics Centre given this case and the misuse by Kogan," he said. Aleksander Kogan was the researcher who built the quiz used by Cambridge Analytica.
"We want to thank CNBC for bringing this case to our attention," Archibong added.
The revelation comes as Facebook CEO Mark Zuckerberg prepares to answer questions before Congress this week stemming from the Cambridge Analytica scandal. The Senate Commerce and Judiciary committees and the House Energy and Commerce Committee are expected to quiz him on what the site is doing to enhance user privacy, and prevent foreign actors from using Facebook to meddle in future elections.
Since the Cambridge Analytica scandal erupted, Facebook CEO Mark Zuckerberg has claimed personal responsibility for the data privacy leaks, and the company has launched several initiatives to increase user control over their data.
Meet CubeYou
CubeYou boasts on its web site that it uses census data and various web and social apps on Facebook and Twitter
to collect personal information. CubeYou then contracts with advertising agencies who want to target certain types of Facebook users for ad campaigns.
CubeYou's site says it has access to personally identifiable information (PII) such as first names, last names, emails, phone numbers, IP addresses, mobile IDs and browser fingerprints.
On a cached version of its web site from March 19, it also said it keeps age, gender, location, work and education, and family and relationship information. It also has likes, follows, shares, posts, likes to posts, comments to posts, check-ins and mentions of brands/celebrities in a post. Interactions with companies are tracked back to 2012 and are updated weekly, the site said.
"This PII information of our panelists is used to verify eligibility (we do not knowingly accept panelists under the age of 18 in our panel), then match and/or fuse other online and offline data sources to enhance their profiles," CubeYou wrote.
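Matching and fusing records across data sources in the way CubeYou describes is, at bottom, a join on shared identifiers. A simplified sketch of that kind of record linkage (the field names and data are invented):

```python
# Simplified record linkage: fuse an online profile with an offline
# record when they share an identifier such as an email address.
def fuse_profiles(online: list, offline: list, key: str = "email") -> list:
    by_key = {rec[key]: rec for rec in offline if key in rec}
    fused = []
    for profile in online:
        extra = by_key.get(profile.get(key), {})
        fused.append({**extra, **profile})  # online fields win on conflict
    return fused

online = [{"email": "a@x.com", "likes": ["brand1"]}]
offline = [{"email": "a@x.com", "age": 34, "city": "Chicago"}]
print(fuse_profiles(online, offline))
```

With emails, phone numbers, mobile IDs and browser fingerprints all available as join keys, a sparse quiz response can be enriched into the detailed profile the site advertises.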
The company's web site currently claims it has more than 10 million opted-in panelists, but the cached March 19 version said it had "an unbiased panel of more than 45 million people globally."
CubeYou collected a lot of this data through online apps that are meant to be entertaining or fun.
An ad agency exec who met with the company confirmed CubeYou said it mostly collects information through quizzes.
According to its web site, one of CubeYou's "most viral apps" is a Facebook quiz created in conjunction with the University of Cambridge called "You Are What You Like." It is meant "to predict a user's personality based on the pages s/he liked on Facebook."
Two versions of this app still were active on Facebook as of Sunday morning. The most recent version of this app has been renamed "Apply Magic Sauce," (YouAreWhatYouLike.com redirects to ApplyMagicSauce.com), and existed on the platform as recently as Sunday morning. Another version still called "You Are What You Like" is also available.
When a user clicks on the "App Terms" link
for the Apply Magic Sauce app, it links to a page saying that the
information collected through the quiz is intended for "non-exclusive access for research purposes only"
and only for "non-profit academic research that has no connection
whatsoever to any commercial or profit-making purpose or entity."
After CNBC contacted Facebook for this story,
Facebook said there were two previous versions of the app named "You Are
What You Like," one created in 2013, which was deleted by the
developer, and one submitted later in 2013.
Both of those prior versions had similar disclaimers on Facebook about being used for academic research purposes.
In addition, those prior versions were able to get access to information from friends of the people who took the quiz -- as also happened in the Cambridge Analytica case. Until 2015, Facebook allowed developers to access information on Facebook friends as long as the original app user opted-in, a loophole that expanded the database of personal information considerably.
If the original user still remained opted in, CubeYou could theoretically still access their data to this day.
CubeYou and Cambridge U's response
When reached for comment, CubeYou CEO Federico Treu
said the company was involved with developing the app and website, but
only worked with Cambridge University from December 2013 to May 2015.
It only collected data from that time and has not had access since June 2015 to data from new people who have taken the quiz, Treu said.
He also pointed out that the YouAreWhatYouLike.com website has different -- and looser -- terms of usage than the Facebook terms that CNBC discovered.
The web site says, "the information you submit to You Are What You Like may be stored and used for academic and business purposes, and also disclosed to third parties, including for example (but not limited to) research institutions. Any disclosure will be strictly in an anonymous format, such that the information can never be used to identify you or any other individual user." (Italics added by CNBC.)
He also denied CubeYou has access to friends' data if a user opted in, and said it only connects friends who have opted into the app individually.
Cambridge University said CubeYou's involvement was limited to developing a website.
"We were not aware of Cubeyou's claims on their blog," the University of Cambridge Psychometrics Center said in a statement.
"Having had a look now, several of these appear to be misleading and we will contact them to request that they clarify them. For example, we have not collaborated with them to build a psychological prediction model -- we keep our prediction model secret and it was already built before we started working with them," the institution said.
"Our relationship was not commercial in nature and no fees or client projects were exchanged. They just designed the interface for a website that used our models to give users insight on their [the users'] data. Unfortunately collaborators with the University of Cambridge sometimes exaggerate their connection to Cambridge in order to gain prestige from its academics' work," it added.
'A great place for us to get smart about the consumer'
CubeYou certainly claimed it was able to use this data to target Facebook users, and advertisers seem to have bought the pitch.
CubeYou's web site says its customers include global communications firm Edelman, and sports and entertainment agency Octagon. It also works with advertising agencies including 72andSunny (which counts Google, Adidas and Coors Light as clients), the Martin Agency (Discover, Geico, Experian), and Legacy Marketing (L'Oreal, Hilton, TGI Fridays), among others.
The site does not say which CubeYou data was used on which projects, but all the agencies' testimonials describe how CubeYou's data has allowed them to better understand potential customers.
"CubeYou is a great place for us to get smart about the consumer," one customer testimonial from Legacy Marketing says. "We primarily use Mintel for our research, but there's very little consumer segmentation and I think that the greatest benefit of a tool like CubeYou is you can get highly nuanced data about demographics, psychographics and interests so easily."
_______________________
Facebook Data on 87 Million Users May Have Been Improperly Shared
Mark Zuckerberg says he made a ‘huge mistake’ in not focusing on protecting privacy of user data
Facebook Inc. Chief Executive Officer Mark Zuckerberg said Wednesday that he made a “huge mistake” in not focusing more on potential abuse of users’ personal information, as the social-media giant he founded revealed that data breaches were far more extensive than previously known.
Facebook to Check Groups Behind ‘Issue Ads’
Move aims to prevent the spread of misinformation
Facebook Inc. will soon require that advertisers wanting to run ads on hot-button political issues go through an authorization process first, a move the social network hopes will prevent the spread of misinformation across its platform.
U.S., States Step Up Pressure on Facebook
The attorneys general of 37 states and territories escalate a backlash that has shaken the social-media giant
Government officials ratcheted up pressure Monday on Facebook Inc.
over its handling of user data, with federal regulators saying they are
investigating the social-media giant’s privacy policies and 37 state
attorneys general demanding explanations for its practices.
The Federal Trade Commission, in a statement, signaled that its probe of Facebook is broad. Tom Pahl, a top FTC official, said the commission “takes very seriously” recent reports raising “substantial concerns about the privacy practices of Facebook.”
Facebook and Google Face Emboldened Antagonists: Big Advertisers
Latest uproar over voter profiling data follows company demands for more control, more transparency from tech giants
Add to the list of people frustrated with Facebook Inc. and Google a
quiet but hugely influential group—the people who pay the bills.
In the past year and a half, the two firms have had one run-in after another with advertisers. Procter & Gamble Co. was among many companies that boycotted Google’s YouTube when they discovered ads were running before extremist and racist videos.
___________________________________
Facebook is about to tell users if their data was shared with Cambridge Analytica
Facebook on Monday will begin alerting the 87 million users whose data may have been harvested by Cambridge Analytica.
The company plans to post a link at the top of users' news feeds that
will allow them to see which apps are connected to their Facebook
accounts and what information those apps are permitted to see.
"As part of this process we will also tell people if their information
may have been improperly shared with Cambridge Analytica," the company said last week.
Facebook users will also have the opportunity to use the link to delete apps and prevent them from collecting more information. The company has faced fierce backlash since news broke last month that Cambridge Analytica, a London-based voter analytics group, was able to obtain information about tens of millions of users.
The controversy has renewed questions about whether the world's largest social media platform does enough to protect the sensitive information it collects from users on its platform.
The data Cambridge Analytica obtained was originally collected by University of Cambridge psychology professor Aleksandr Kogan.
He used an app, called "thisisyourdigitallife," which offered a personality test. Facebook users who downloaded the app also gave it permission to collect data on their location, their friends and content they had "liked." The data collection was all completely allowable under Facebook's rules at the time.
Facebook has said that Kogan violated its terms of service by passing the information on to Cambridge Analytica, a firm that was later hired to work on President Donald Trump's campaign in 2016.
Facebook banned Kogan and Cambridge Analytica from its platform last month, just before The New York Times published an investigative piece detailing how the data traded hands.
As the controversy swelled, members have used the "download a copy of your Facebook data" feature to get a glimpse of exactly what information the social network has about its users.
Many were rattled to find years' worth of private texts traded on the platform's Messenger feature, code for recognizing faces in photographs, and contact information that people thought was tucked away on their cell phones.
Also ahead this week: CEO Mark Zuckerberg will face a grilling from Congress on Tuesday to discuss the data controversy.
—CNN's Charles Riley and Sara Ashley O'Brien contributed to this report.
_____________________________
Trey Gowdy wants answers from Mark Zuckerberg
House Oversight and Government Reform Committee Chairman Trey Gowdy wants answers from Facebook CEO Mark Zuckerberg about the company's data policies. Gowdy has requested a briefing from Facebook no later than two weeks from Friday to go over the details of those policies.
Mark Zuckerberg's name is ringing across
Capitol Hill again. Politicians are demanding that the Facebook
co-founder and CEO testify to Congress in the wake of the social
network's scandal involving a data firm affiliated with the Donald Trump
campaign.
Facebook disclosed late Friday that researchers from UK-based Cambridge Analytica had duped the social networking giant and gained access to data from more than 50 million Facebook users through an app called "thisisyourdigitallife," which was then used for political ads during the 2016 presidential election.
Facebook said in a statement Friday
that it had banned the group, but the political pressure on the massive
social network is just beginning. By Monday morning, multiple senators
were demanding that Zuckerberg testify before Congress.
An appearance from Zuckerberg could potentially offer answers at a time when Facebook has gotten into hot water over its involvement with the distribution of Russia-made ads and posts on its network. But it's unclear whether it'll happen.
While the government has summoned Facebook multiple times, the CEO
has never testified on these issues. In the past, Facebook has sent its general counsel Colin Stretch; Monika Bickert, its head of global policy management; and other executives not named Mark Zuckerberg.
'They need to take responsibility'
But the call for Facebook's CEO continues to rise. On Saturday, Sen. Amy Klobuchar tweeted that "Mark Zuckerberg needs to testify before the Senate Judiciary." The Minnesota Democrat added to her demand on Monday morning, telling NPR's Morning Edition that Zuckerberg needs to speak for Facebook's flaws.
"They have not come before us, they've given it to their lobbyists and their lawyers, and we think that they need to take responsibility for what's going on," Klobuchar said. "I don't know why this CEO, even though he's super famous and has made a lot of money, why he also doesn't have to come before the committee."
She pointed out that multiple CEOs have testified to Congress in the past, and said the chances of Zuckerberg appearing increase if more politicians call for it.
Responding to a request for comment, Facebook didn't address whether Zuckerberg would be willing to testify before Congress.
"We are in the process of conducting a comprehensive internal and external review as we work to determine the accuracy of the claims that the Facebook data in question still exists," said Paul Grewal, Facebook vice president and deputy general counsel. "That is where our focus lies as we remain committed to vigorously enforcing our policies to protect people's information."
Klobuchar isn't the only one speaking out. The Federal Election Commission on Monday also called for Zuckerberg, as well as Larry Page, CEO of Google parent Alphabet, and Twitter CEO Jack Dorsey to testify at a public hearing set for June 27.
"Your perspective would be of great value to the Commission and to the nation," Ellen Weintraub, the FEC's vice chair, said in her letter to Zuckerberg.
In a joint letter with Klobuchar, Sen. John Kennedy, a Republican from Louisiana, has also called for Zuckerberg to testify before Congress, and asked Senate Judiciary Committee Chairman Sen. Chuck Grassley, a Republican from Iowa, to call for a hearing.
"While this Committee's Subcommittee on Crime and Terrorism convened a hearing with witnesses representing Facebook, Twitter, and Google in October of 2017, we have yet to hear from the leaders of these companies directly," Kennedy and Klobuchar wrote.
The letter also asks that the CEOs from Google and Twitter testify.
In response to the letter from Klobuchar and Kennedy, a spokeswoman for Grassley said the senator's taking the request under consideration: "At this point, no decision has been made on whether to hold such a hearing or whether it would occur at the full committee or subcommittee level."
Sen. Mark Warner, a Democrat from Virginia, made a similar request on Thursday, before news of the scandal came out. The vice chairman of the Senate Intelligence Committee told Bloomberg "the CEOs owe an obligation."
Weaponizing psychological profiles
On Tuesday, Warner formally addressed Zuckerberg, writing that Facebook owes the public an explanation. "It's time for Mr. Zuckerberg and the other CEOs to testify before Congress. The American people deserve answers about social media manipulation in the 2016 election," Warner said in a tweet.
Cambridge Analytica released a statement Monday morning calling the claims against its company "false allegations."
On Monday, Sen. Ron Wyden (D-Ore.), wrote a letter to Zuckerberg, asking for the CEO to explain how Facebook's data was abused by Cambridge Analytica.
"With little oversight -- and no meaningful intervention from Facebook -- Cambridge Analytica was able to use Facebook-developed and marketed tools to weaponize detailed psychological profiles against tens of millions of Americans," Wyden wrote in his letter.
Several senators have added their requests for Zuckerberg to head to Washington, DC, including Sen. Richard Blumenthal, a Democrat from Connecticut.
"Mark Zuckerberg needs to testify under oath in public before the Judiciary Committee. He owes it to the American people who ought to be deeply disappointed by the conflicting and disparate explanations that have been offered," he told reporters on Monday. Blumenthal added that Zuckerberg should be subpoenaed to appear if he won't come on his own.
Sens. John Thune (R-SD), Roger Wicker (R-Miss.) and Jerry Moran (R-Kan.) signed a joint letter on Monday as well, demanding a response from Zuckerberg by March 29.
Rep. Adam Schiff, a Democrat from California on the House Intelligence Committee, called for Cambridge Analytica, as well as Facebook and Zuckerberg, to testify to Congress.
"I think it would be beneficial to have him come testify before the appropriate oversight committees," he told The Washington Post.
The pressure isn't just coming from DC. The European Union has also launched an investigation into Cambridge Analytica and Facebook, according to a statement from Antonio Tajani, the European Parliament president.
In the UK, Damian Collins, the chair of Parliament's Digital, Culture, Media and Sport Committee, on Tuesday sent a letter to Zuckerberg to request that he make an appearance to provide "oral evidence" about Facebook's handling of user data.
"It is now time to hear from a senior Facebook executive with sufficient authority to give an accurate account of this catastrophic failure of process," Collins wrote. "Given your commitment at the start of the New Year to 'fixing' Facebook, I hope that this representative will be you."
First published March 19 at 9:28 a.m. PT.
Update, 10:35 a.m. PT: Adds a letter from Sen. Ron Wyden.
Update, 11:40 a.m. PT: Adds comment from Facebook.
Update, 12:22 p.m. PT: Adds a response from Cambridge Analytica.
Update, 12:40 p.m. PT: Adds a comment from a spokeswoman for Grassley.
Update, March 20 at 6:38 a.m. PT: Adds new statements from Sen. Mark Warner and UK member of Parliament Damian Collins.
Update, March 20 at 7:37 a.m. PT: Adds statements from four senators.
How the Cambridge Analytica story became a crisis
The longer you consider Facebook’s Cambridge Analytica scandal,
the stranger it seems. The basic details of the story, in which a
researcher improperly gave away data to the company that became Donald
Trump’s data operations team in 2016, have been known for two years.
The effectiveness of Cambridge Analytica’s psychographic targeting,
which attempted to influence voters by mapping out their Facebook Likes,
is highly suspect and likely overstated. The eye-popping number of
Facebook profiles said to be involved — 50 million — may turn out to be marketing hype for a company that excels at it.
And yet, revelations from this weekend’s stories in The New York Times and The Guardian continue to batter the company. A bipartisan group of US senators
called upon CEO Mark Zuckerberg to testify about how Cambridge
Analytica came into possession of so much user data. British authorities
promised to investigate the incident as well. On Monday, the company’s stock fell more than 10 percent from the all-time high it set on February 1st. On Tuesday morning, Bloomberg reported that the Federal Trade Commission is investigating the company over its use of personal data.
Cambridge Analytica’s data misuse may ultimately have had
little effect in influencing elections here or abroad. But the way
Cambridge Analytica obtained its data, and reports that the company held
on to the data, despite telling Facebook it had deleted it, have
renewed concerns about data privacy on the world’s biggest social
network. After learning that data from a researcher’s personality quiz
app had improperly been shared with Cambridge Analytica, Facebook took
the company at its word that it had purged user profiles: “That to me
was the most astonishing thing,” former employee Christopher Wylie told The Guardian.
“They waited two years and did absolutely nothing to check that the
data was deleted. All they asked me to do was tick a box on a form and
post it back.”
Facebook’s lack of enforcement in the face of bad actors,
coupled with misuse of its platform on a grand scale, have drawn
outrage around the globe. And while Cambridge Analytica is among the
most prominent examples to date of how Facebook can be misused, it
belongs to a long and growing list. In March alone:
- Facebook, Instagram, and WhatsApp were forced to shut down temporarily in Sri Lanka after inflammatory messages posted to the service incited mob violence against the country’s Muslim minority.
- United Nations investigators blamed Facebook for spreading hate speech that incited violence against the Rohingya minority in Myanmar.
- Facebook briefly auto-filled its search bar with suggestions for porn.
- A far-right Italian politician credited Facebook with his party’s surprising electoral victory, after reports that Russian state media used the platform to promote stories suggesting Italy faced an immigration crisis.
- Facebook banned far-right group Britain First, which had more than 2 million followers, for inciting violence against minorities.
- Facebook’s chief security officer is quitting after reportedly arguing too forcefully that the company should investigate and disclose Russian activity on the platform.
Taken together, these incidents paint a picture of a
platform on which crises are developing faster than its minders can
address them. A year and a half after Donald Trump’s election sparked a
cultural reckoning over social media, Facebook has struggled to contain
the fallout. A series of steps taken to remove terrorist propaganda more
quickly, and tamp down on the spread of fake news, have produced some
encouraging results. But those steps have done little to stop the daily
drumbeat of articles about ways in which Facebook is misused around the
world, often with disturbing results.
Facebook has typically been quick to apologize
when confronted with misuse of the platform, promising it will do
better in the future. But the company has taken a defensive posture over
the Cambridge Analytica stories, saying that the issue was resolved
years ago. Yet while the company plays defense, a growing number of
lawmakers and regulators around the world are promising to investigate
the company. This scandal really is different.
The company said Monday that it had hired a forensics team to audit Cambridge Analytica, with the firm's permission. But before Facebook could complete its audit, the United Kingdom Information Commissioner's Office ordered the team to stand down while the office pursues a warrant to mount its own investigation.
It was a dramatic real-world standoff in a case that has
until now played out mostly online. And yet the standoff also had an
undeniable symbolism: Facebook, attempting to fix its mistakes by
itself, found itself at last restrained by the government. As Tuesday
began, neither Zuckerberg nor his chief operating officer, Sheryl
Sandberg, had made a statement about the Cambridge Analytica
revelations. In the brutal months since the election, Facebook has
typically been quick to apologize. But after an overwhelming March, it
appears that its top executives are speechless.
The Key to Understanding Facebook's Current Crisis
Facebook is scrambling to placate users, advertisers and investors following a string of damaging news reports about the misuse of user data.
Last week, Facebook confirmed that Cambridge Analytica, a data
firm hired by President Trump’s campaign, had violated the company’s
policies when it purchased the data of 50 million users from a
researcher who accessed it in 2013. The stock plunged, lawmakers began
demanding answers and users threatened to quit the social network
altogether.
Cambridge Analytica says it’s launching its own investigation to see if the firm engaged in wrongdoing, and in a Facebook post, CEO Mark Zuckerberg acknowledged that Facebook knew about the policy violation in 2015. Facebook asked the data firm and the researcher to certify that the information had been deleted, but it didn’t notify users at the time.
Now, Facebook is facing a wave of backlash for not doing more to prevent information from being abused. Although the trove of information used by Cambridge Analytica was downloaded before 2015, the year Facebook implemented stricter data policies, it has exposed an ugly truth for the social network: user information that was accessed during the company’s earlier years can still be abused today.
In the video above, we take a look at how Facebook’s lax policies of the past regarding the sharing of data paved the way for the company’s current crisis.
After Days of Silence, Facebook’s Mark Zuckerberg Admits to ‘Mistakes’ With User Data
CEO pledges to investigate outsiders’ handling of user information
Facebook Inc. Chief Executive Mark Zuckerberg broke his silence
five days into a growing uproar about how outsiders handle Facebook’s
user data, admitting mistakes and pledging an investigation but failing
to calm some who thought he should have gone further in his remarks.
The growing controversy has shaken the social-media company, knocking its stock price lower and prompting renewed calls for governments to better regulate technology businesses that hold enormous quantities of information about their users.
Breaking five days of silence, Facebook CEO Mark Zuckerberg admitted mistakes and outlined steps to protect user data in light of a privacy scandal involving a Trump-connected data-mining firm.
Zuckerberg said Wednesday that Facebook has a "responsibility" to protect its users' data and if it fails, "we don't deserve to serve you."
But Zuckerberg stopped short of apologizing.
And he wrote "what happened" instead of "what we did," leaving Facebook one step removed from responsibility.
Zuckerberg and Facebook's No. 2 executive, Sheryl Sandberg, have been quiet since news broke Friday that Cambridge Analytica may have used data improperly obtained from roughly 50 million Facebook users to try to sway elections.
Facebook shares have dropped some 8 percent since the revelations were first published, raising questions about whether social media sites are violating users' privacy.
Zuckerberg said Facebook had already taken the most important steps to prevent a recurrence even before the scandal broke. For example, in 2014 it reduced the access outside apps had to user data. However, some of the measures didn't take effect until a year later, allowing Cambridge to access the data in the intervening months.
Zuckerberg acknowledges that there is more to do.
In a Facebook post on Wednesday, Zuckerberg said Facebook will ban developers who don't agree to an audit. An app's developer will no longer have access to data from people who haven't used that app in three months. Data will also be generally limited to user names, profile photos and email, unless the developer signs a contract with Facebook and gets user approval.
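The access limits described above amount to a simple policy check. The sketch below is purely illustrative: the function, field names, and the reading of "three months" as 90 days are assumptions for clarity, not Facebook's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the developer data-access rules described above.
# Field names and the 90-day cutoff are illustrative assumptions.
BASIC_FIELDS = {"name", "profile_photo", "email"}

def allowed_fields(last_used, agreed_to_audit, signed_contract, user_approved, now):
    """Return the set of fields a developer may still access for one user."""
    if not agreed_to_audit:
        return set()                      # developers who refuse an audit are banned
    if now - last_used > timedelta(days=90):
        return set()                      # no data on users inactive for ~3 months
    fields = set(BASIC_FIELDS)
    if signed_contract and user_approved:
        fields.add("extended_data")       # more data requires a contract plus user approval
    return fields
```

Under this reading, refusing the audit or three months of user inactivity cuts off access entirely, and anything beyond name, photo and email requires both a signed contract and explicit user approval.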
In a separate post, Facebook said it will inform people whose data was misused by apps. And in the future, when it bans an app for misusing people's data, Facebook promises to tell everyone who used it.
Facebook first learned of this breach of privacy more than two years ago, but hadn't mentioned it publicly until Friday.
The company said it is also "building a way" for people to know if their data was accessed by "This Is Your Digital Life," though there is no way to do this at the moment. The app is the psychological profiling quiz that researcher Aleksandr Kogan created and paid about 270,000 people to take part in. Cambridge Analytica later obtained data from the app for about 50 million Facebook users, because it also vacuumed up data on people's friends.
Facebook didn't say how it would inform users if their data was compromised. But it could look similar to the page it set up for users to see if they liked or followed accounts set up by the Russian troll farm Internet Research Agency, accused of meddling in the 2016 presidential election. This tool, however, doesn't show users if they merely saw — or even "liked" — posts from those pages.
Earlier Wednesday, Kogan described himself as a scapegoat and said he had no idea his work would be used in Donald Trump's 2016 presidential campaign.
Kogan, a psychology researcher at Cambridge University, told the BBC that both Facebook and Cambridge Analytica have tried to place the blame on him for violating the social media platform's terms of service, even though Cambridge Analytica assured him that everything he did was legal.
"Honestly, we thought we were acting perfectly appropriately," Kogan said. "We thought we were doing something that was really normal."
Cambridge has shifted the blame to Kogan, whom the firm described as a contractor.
Kogan said Cambridge Analytica approached him to gather Facebook data and provided the legal advice that this was "appropriate."
"One of the great mistakes I did here was I just didn't ask enough questions," he said. "I had never done a commercial project; I didn't really have any reason to doubt their sincerity. That's certainly something I strongly regret now."
He said the firm paid some $800,000 for the work, but it went to participants in the survey.
"My motivation was to get a data set I could do research on; I have never profited from this in any way personally," he said.
Authorities in Britain and the United States are investigating.
Sandy Parakilas, who worked in data protection for Facebook in 2011 and 2012, told a U.K. parliamentary committee Wednesday that the company was vigilant about its network security but lax when it came to protecting users' data.
He said personal data including email addresses and in some cases private messages was allowed to leave Facebook servers with no real controls on how the data was used after that.
"The real challenge here is that Facebook was allowing developers to access the data of people who hadn't explicitly authorized that," he said, adding that the company had "lost sight" of what developers did with the data.
Meanwhile, the top prosecutors in Massachusetts and New York have sent a letter to Facebook demanding the social media giant protect its users' private information.
Massachusetts Attorney General Maura Healey and New York Attorney General Eric Schneiderman launched a joint investigation Saturday after reports that British data analysis firm Cambridge Analytica captured information from 50 million Facebook users without their consent.
Healey said residents in her state "deserve answers immediately," from Facebook and Cambridge Analytica about what data was shared and how it was allowed to happen. Her office said it has been in touch with Facebook about the investigation.
Schneiderman said that if the company violated New York law "we will hold them accountable."
___
Danica Kirka and Gregory Katz reported from London.
Facebook CEO Mark Zuckerberg admits mistakes, pledges fixes after data scandal
By BARBARA ORTUTAY, DANICA KIRKA and GREGORY KATZ
DRUDGE REPORT 2018® reports that Facebook, Twitter and YouTube are blocking conservative news feeds across social media.
A list of 10 alleged violations by YouTube, Twitter and Facebook over the last 10 years:
1) Violating freedom of the press
2) Violating freedom of religion
3) Failing to block ISIS terrorist groups
4) Facing harassment charges
5) Failing to block scammers
6) Violating major constitutional rights
7) Targeting Tea Party Patriots and other conservative Tea Party groups
8) Cyber-bullying
9) Blocking speech and messages on YouTube, Twitter and Facebook
10) Allowing death-threat messages to be posted on Facebook without being blocked
Facebook Workers: We Routinely Suppressed Conservative News
Here is how you can circumvent Facebook’s block on Jihad Watch
That means us, friends, however unjustified that lack of trust may be, and others whom the hard-Left censors at Facebook deem unworthy of your attention. Foes of jihad terror are on their block list, but here is a way you can adjust your settings so that you still get the news we report here:
“Facebook’s Changing Your Newsfeed. Here’s How To Make Sure You Still See Posts By Your Favorite Sites.,” by James Barrett, Daily Wire, January 26, 2018 (thanks to the Geller Report):
Facebook recently announced that it will be making major changes to its newsfeed that will significantly impact what users see. The emphasis, CEO Mark Zuckerberg explained, will be on posts from users’ friends and family, as well as what Facebook calls “trusted sources.”
Those “trusted sources,” however, are not necessarily going to be the same pages and news sites that users follow; rather, they are sources that Facebook designates as “trusted” through what it says will be rankings produced by “a diverse and representative” sample of Facebook users (see full post below). Which sources are deemed “trusted” and which are not is unclear. Sources not deemed “trusted” — even those you choose to follow — will get buried or de-emphasized in your newsfeed.
But there’s a way to make sure that Facebook does not prevent you from seeing posts by your favorite sites. Below are the instructions for how to update your Facebook settings so that your newsfeed prioritizes posts by sites you follow, like The Daily Wire, rather than letting the platform determine what you get to see.
1. On your Facebook homepage, click the drop-down arrow on the top right of the page and select “News Feed Preferences” (usually found near the bottom of the listed options).
2. Select “Prioritize who to see first” (usually the first option listed).
3. Change the view options to show “Pages only,” so it’s easier to find the pages for the sites you prefer to see in your newsfeed. Then simply select the pages you wish to see first in your newsfeed.
Another way to protect your newsfeed: Go to the Facebook page of the site you want to follow, click the “Following” drop-down arrow, and check the “See First” option “In Your News Feed.”
After you’ve protected your newsfeed to make sure you’re still seeing posts from your favorite sources, the other extremely important thing you can do to make sure those sources don’t get buried by Facebook is share posts with friends and family.
Here is an excerpt of the message posted by Zuckerberg explaining the platform’s new emphasis on promoting “trusted” news sources in order to protect against “sensationalism, misinformation and polarization” (full post below):
There’s too much sensationalism, misinformation and polarization in the world today. Social media enables people to spread information faster than ever before, and if we don’t specifically tackle these problems, then we end up amplifying them. That’s why it’s important that News Feed promotes high quality news that helps build a sense of common ground.
The hard question we’ve struggled with is how to decide what news sources are broadly trusted in a world with so much division. We could try to make that decision ourselves, but that’s not something we’re comfortable with. We considered asking outside experts, which would take the decision out of our hands but would likely not solve the objectivity problem. Or we could ask you — the community — and have your feedback determine the ranking.
We decided that having the community determine which sources are broadly trusted would be most objective.
Here’s how this will work. As part of our ongoing quality surveys, we will now ask people whether they’re familiar with a news source and, if so, whether they trust that source. The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don’t follow them directly. (We eliminate from the sample those who aren’t familiar with a source, so the output is a ratio of those who trust the source to those who are familiar with it.)
This update will not change the amount of news you see on Facebook. It will only shift the balance of news you see towards sources that are determined to be trusted by the community.
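The scoring rule Zuckerberg describes above — drop respondents unfamiliar with a source, then take the ratio of those who trust it to those familiar with it — can be sketched in a few lines. The data structure and function below are illustrative assumptions, not Facebook's actual survey code.

```python
# Illustrative sketch of the "broadly trusted" score described above:
# unfamiliar respondents are eliminated from the sample, and the score
# is the ratio of trusting respondents to familiar respondents.
def trust_score(responses):
    """responses: list of (familiar, trusts) booleans, one per survey answer."""
    familiar = [trusts for is_familiar, trusts in responses if is_familiar]
    if not familiar:
        return None                     # no familiar respondents: no score
    return sum(familiar) / len(familiar)

# Example: 4 familiar respondents, 3 of whom trust the source -> 0.75.
# The 2 unfamiliar respondents do not affect the ratio.
score = trust_score([(True, True), (True, True), (True, True),
                     (True, False), (False, False), (False, False)])
```

Note how, under this rule, a niche outlet known only to its own loyal readers can score as high as a mass-market one, since unfamiliar respondents are excluded rather than counted as distrust.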
Facebook Does Not Believe In U.S. Legal Process Requirements
We disclose account records solely in accordance with our terms of service and applicable law, including the federal Stored Communications
Act ("SCA"), 18 U.S.C. Sections 2701-2712. Under U.S. law:
- A valid subpoena issued in connection with an official criminal investigation is required to compel the disclosure of basic subscriber records (defined in 18 U.S.C. Section 2703(c)(2)), which may include: name, length of service, credit card information, email address(es), and a recent login/logout IP address(es), if available.
- A court order issued under 18 U.S.C. Section 2703(d) is required to compel the disclosure of certain records or other information pertaining to the account, not including contents of communications, which may include message headers and IP addresses, in addition to the basic subscriber records identified above.
- A search warrant issued under the procedures described in the Federal Rules of Criminal Procedure or equivalent state warrant procedures upon a showing of probable cause is required to compel the disclosure of the stored contents of any account, which may include messages, photos, videos, timeline posts, and location information.
- We interpret the national security letter provision as applied to Facebook to require the production of only 2 categories of information: name and length of service.
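The tiers above form a simple mapping from legal instrument to the categories of records it can compel. The sketch below is a hypothetical illustration of that structure; the category names are ours, not Facebook's schema.

```python
# Hypothetical mapping of the SCA tiers summarized above: each stronger
# legal instrument unlocks a broader category of account records.
DISCLOSURE_TIERS = {
    "subpoena": {"basic_subscriber_records"},            # 18 U.S.C. 2703(c)(2)
    "2703d_order": {"basic_subscriber_records",
                    "message_headers", "ip_addresses"},  # 18 U.S.C. 2703(d)
    "search_warrant": {"basic_subscriber_records",
                       "message_headers", "ip_addresses",
                       "stored_contents"},               # probable-cause warrant
    "national_security_letter": {"name", "length_of_service"},
}

def disclosable(instrument):
    """Return the record categories compellable with a given instrument."""
    return DISCLOSURE_TIERS.get(instrument, set())
```

The key asymmetry this captures is that only a probable-cause search warrant reaches stored contents (messages, photos, posts), while a subpoena reaches only basic subscriber records.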
Facebook Message
The email address provided is not a government issued or law enforcement email address. Please try again with a valid email address.
Email dawns@lapd.gov
Enter your email address to receive a unique link to the Law Enforcement Online Request System. The link will give you access to the system for one hour.
_________________________________
German court rules Facebook data use, privacy settings illegal
By Ali Breland - 02/12/18 01:15 PM EST
A regional court in Germany has found Facebook’s default privacy settings and use of personal data it collects from users to be in violation of consumer protection laws. The Berlin court found that Facebook did not provide users enough information for them to understand how their data is being collected and that any agreements users signed did not constitute meaningful consent. VZBV, the German privacy advocacy group that filed the suit, argued that data collection agreements that Facebook users are automatically opted into don’t give users enough notice about what they’re agreeing to.
“Facebook hides default settings that are not privacy-friendly in its privacy center and does not provide sufficient information about it when users register,” said Heiko Dünkel, a litigation policy officer at the VZBV. “This does not meet the requirement for informed consent.”
The court ruled that several Facebook default data-sharing settings did not count as consent from the user. It also found clauses in Facebook’s terms of service to be invalid, including its policy of requiring users to use their “authentic names” on the website. Facebook told The Guardian that it intended to appeal the decision. “We are working hard to ensure that our guidelines are clear and easy to understand and that the services offered by Facebook are in full accordance with the law,” the company said in a statement.
The social media company is also dealing with scrutiny from the national government in Germany and the European Union over its data collection and privacy policies. Facebook had previously said that it will be making significant changes to its privacy settings to conform with the EU's new General Data Protection Regulation, laws covering data use across the EU. “We’re rolling out a new privacy center, globally, that will put the core privacy settings for Facebook in one place and make it much easier for people to manage their data,” Facebook COO Sheryl Sandberg said of the changes in January.
Where can I find my settings? (Facebook Help)
To find your settings, click the drop-down arrow in the top right corner of your screen and select Settings. From here, you can select the option in the left sidebar that contains the settings you want to adjust:
General: Edit the basics like your name, email or password
Security and Login: Turn on alerts and approvals to keep your account secure
Privacy: Adjust who can see your stuff and who can look you up
Timeline and Tagging: Set who can see your timeline and how to manage photo tagging
Blocking: Manage who and what you block
Language: Select the language that you want to use for Facebook
These new allegations emerged after Gizmodo last week revealed details about the inner workings of Facebook’s trending news team—a small group of young journalists, primarily educated at Ivy League or private East Coast universities, who curate the “trending” module on the upper-right-hand corner of the site. As we reported last week, curators have access to a ranked list of trending topics surfaced by Facebook’s algorithm, which prioritizes the stories that should be shown to Facebook users in the trending section. The curators write headlines and summaries of each topic, and include links to news sites. The section, which launched in 2014, constitutes some of the most powerful real estate on the internet and helps dictate what news Facebook’s users—167 million in the US alone—are reading at any given moment.
“I believe it had a chilling effect on conservative news.”
“Depending on who was on shift, things would be blacklisted or trending,” said the former curator. This individual asked to remain anonymous, citing fear of retribution from the company. The former curator is politically conservative, one of a very small handful of curators with such views on the trending team. “I’d come on shift and I’d discover that CPAC or Mitt Romney or Glenn Beck or popular conservative topics wouldn’t be trending because either the curator didn’t recognize the news topic or it was like they had a bias against Ted Cruz.”
Another former curator agreed that the operation had an aversion to right-wing news sources. “It was absolutely bias. We were doing it subjectively. It just depends on who the curator is and what time of day it is,” said the former curator. “Every once in awhile a Red State or conservative news source would have a story. But we would have to go and find the same story from a more neutral outlet that wasn’t as biased.”
Stories covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories. Other former curators interviewed by Gizmodo denied consciously suppressing conservative news, and we were unable to determine if left-wing news topics or sources were similarly suppressed. The conservative curator described the omissions as a function of his colleagues’ judgments; there is no evidence that Facebook management mandated or was even aware of any political bias at work.
Managers on the trending news team did, however, explicitly instruct curators to artificially manipulate the trending module in a different way: when users weren’t reading stories that management viewed as important, several former workers said, curators were told to put them in the trending news feed anyway. Several former curators described using something called an “injection tool” to push topics into the trending module that weren’t organically being shared or discussed enough to warrant inclusion—putting the headlines in front of thousands of readers rather than allowing stories to surface on their own. In some cases, after a topic was injected, it actually became the number one trending news topic on Facebook. “We were told that if we saw something, a news story that was on the front page of these ten sites, like CNN, the New York Times, and BBC, then we could inject the topic,” said one former curator.
“If it looked like it had enough news sites covering the story, we could inject it—even if it wasn’t naturally trending.” Sometimes, breaking news would be injected because it wasn’t attaining critical mass on Facebook quickly enough to be deemed “trending” by the algorithm. Former curators cited the disappearance of Malaysia Airlines flight MH370 and the Charlie Hebdo attacks in Paris as two instances in which non-trending stories were forced into the module. Facebook has struggled to compete with Twitter when it comes to delivering real-time news to users; the injection tool may have been designed to artificially correct for that deficiency in the network. “We would get yelled at if it was all over Twitter and not on Facebook,” one former curator said.
In other instances, curators would inject a story—even if it wasn’t being widely discussed on Facebook—because it was deemed important for making the network look like a place where people talked about hard news. “People stopped caring about Syria,” one former curator said. “[And] if it wasn’t trending on Facebook, it would make Facebook look bad.”
That same curator said the Black Lives Matter movement was also injected into Facebook’s trending news module. “Facebook got a lot of pressure about not having a trending topic for Black Lives Matter,” the individual said. “They realized it was a problem, and they boosted it in the ordering. They gave it preference over other topics. When we injected it, everyone started saying, ‘Yeah, now I’m seeing it as number one.’” This particular injection is especially noteworthy because the #BlackLivesMatter movement originated on Facebook, and the ensuing media coverage of the movement often noted its powerful social media presence. (In February, CEO Mark Zuckerberg expressed his support for the movement in an internal memo chastising Facebook employees for defacing Black Lives Matter slogans on the company’s internal “signature wall.”)
When stories about Facebook itself would trend organically on the network, news curators used less discretion—they were told not to include these stories at all. “When it was a story about the company, we were told not to touch it,” said one former curator. “It had to be cleared through several channels, even if it was being shared quite a bit. We were told that we should not be putting it on the trending tool.”
(The curators interviewed for this story worked for Facebook across a timespan ranging from mid-2014 to December 2015.)
“We were always cautious about covering Facebook,” said another former curator. “We would always wait to get second level approval before trending something to Facebook. Usually we had the authority to trend anything on our own [but] if it was something involving Facebook, the copy editor would call their manager, and that manager might even call their manager before approving a topic involving Facebook.”
Gizmodo reached out to Facebook for comment about each of these specific claims via email and phone, but did not receive a response.
Several former curators said that as the trending news algorithm improved, there were fewer instances of stories being injected. They also said that the trending news process was constantly being changed, so there’s no way to know exactly how the module is run now. But the revelations undermine any presumption of Facebook as a neutral pipeline for news, or of the trending news module as an algorithmically driven list of what people are actually talking about. Rather, Facebook’s efforts to play the news game reveal the company to be much like the news outlets it is rapidly driving toward irrelevancy: a select group of professionals with vaguely center-left sensibilities. It just happens to be one that poses as a neutral reflection of the vox populi, has the power to influence what billions of users see, and openly discusses whether it should use that power to influence presidential elections. “It wasn’t trending news at all,” said the former curator who logged conservative news omissions. “It was an opinion.”
[Disclosure: Facebook has launched a program that pays publishers, including the New York Times and Buzzfeed, to produce videos for its Facebook Live tool. Gawker Media, Gizmodo’s parent company, recently joined that program.]
Several hours after this report was published, Gizmodo editors started seeing it as a topic in Facebook’s trending section. Gizmodo’s video was posted under the topic but the “Top Posts” were links to RedState.com and the Faith and Freedom Coalition.
Is Facebook Censoring Conservative News, and How Does Social Media Control What We See?
Gizmodo’s Michael Nunez is out today with a sensational story
in which former Facebook employees claim they regularly censored the
platform’s “trending” news section to eliminate stories about
conservative topics that were organically trending, blacklisted
certain news outlets from appearing and artificially “injected” stories
they felt were important but that the site’s users were not discussing
or clicking on. This comes a month after Nunez published
a leaked internal Facebook poll that asked “What responsibility does
Facebook have to help prevent President Trump in 2017?" In short, now that the
curtain has been lifted on Facebook's magical trending algorithm, the
mythical unbiased algorithm powering what users see on the site appears
to be less machine and more biased human curator. Given Facebook's
phenomenal reach across the world and the role it increasingly plays as a
primary news gateway for more and more people, the notion that it is
systematically curating what its users see in a non-algorithmic and
partisan way raises alarm bells about the future of how we access and
consume information.
Ryan Merkley, CEO of Creative Commons, wrote
in Wired last month that “If the Web has achieved anything, it’s that
it’s eliminated the need for gatekeepers, and allowed creators—all of
us—to engage directly without intermediaries, and to be accountable
directly to each other.” Yet, such a rosily optimistic view of the web’s
impact on society seems to ignore the mounting evidence that the web is
in fact merely coalescing around a new set of gatekeepers. As Jack
Mirkinson wrote
for Salon earlier this month, “the internet, that supposed smasher of
gates and leveler of playing fields, has coalesced around a mere handful
of mega-giants in the space of just a couple of decades. The gates
didn’t really come down. The identities of the gatekeepers just changed.
Google, Facebook, Apple, Amazon: How many people can really say that
some portion of every day of their lives isn’t mediated by at least one
of these companies? ... It seems that, at least for the moment, we are
destined to live in the world that they create—and that includes
everyone in the media business."

Far from democratizing how we access the world's information, the web has in fact narrowed those information sources. Much as large national chains and globalization have replaced the local mom-and-pop shop with the megastore and local craftsmanship with assembly-line production, the internet is centralizing information access from a myriad of websites, local newspapers, and radio and television shows to single behemoth social platforms that wield universal global control over what we consume.
Indeed, social media platforms appear to increasingly view themselves no longer as neutral publishing platforms but rather as active mediators and curators of what we see. This extends even to new services like messaging. David Marcus, Facebook's Vice President of Messaging, recently told Wired: "Unlike email where there is no one safeguarding the quality and the quantity of the stuff you receive, we're here in the middle to protect the quality and integrity of your messages and to ensure that you're not going to get a lot of stuff you don't want." In short, Facebook wants to act as an intelligent filter on what we see of the world. The problem is that any filter by design must emphasize some content and views at the expense of others.
In the case of Facebook, the new revelations are most concerning because they go to the very heart of how these new social platforms shape what we understand about the world. It is one thing for a platform to announce it will delete posts that promote terrorism or that threaten another user with bodily harm, but to silently and systematically filter what users see through a distinct partisan lens, especially with regards to news reporting, adds a frightening dimension to just how much power a handful of Silicon Valley companies now wield over what we see online.
Ben Rhodes, deputy national security advisor for strategic communication at the White House, recently raised eyebrows when he remarked on the internet's impact on news reporting: "All these newspapers used to have foreign bureaus. Now they don't. They call us to explain to them what's happening in Moscow and Cairo. Most of the outlets are reporting on world events from Washington. The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns. That's a sea change. They literally know nothing." In the interview he went on to claim that the White House is able to use social media to fill that information gap, putting its own talking points and interpretations out on social media, which he claims are then mindlessly parroted by the media. What happens when Facebook itself goes further and helps promote some of these viewpoints to its users while censoring others?
The notion that a social media platform would systematically censor particular viewpoints or news has unique import in a presidential election year. As The Hill put it, “Facebook is a key outreach, recruiting and advertising tool for presidential candidates, and it is a primary distribution hub for the political news media. It is also where much of the political debate between voters is taking place,” accounting for over 650 million interactions regarding political candidates in a single month this year. The notion that Facebook might be systematically altering what its users see to promote particular views is troubling at best.