
Wednesday, October 31, 2018

( Facebook's Future With a Diminished iPhone News Feed App ) Patcnews, October 31, 2018. The Patriot Conservative News Tea Party Network reports: Facebook's Future With a Diminished iPhone News Feed App. © All Rights Reserved by Patcnews

Facebook's new petition feature could be its next battlefield

Sheryl Sandberg admits Facebook needs to ‘do better’ to protect people, and in 2019 it needs to stop attacking and targeting conservative news and Tea Party groups.
All Trump supporters are also part of the Tea Party groups. Why doesn't Facebook say that?

Facebook's tipping point?

Why 2018 could be seen as a canary in the coal mine for Facebook.
Facebook has launched a new petition feature that will allow its 2.2 billion users to make political demands on the platform, an initiative that could be empowering but also bring a wave of new problems to the beleaguered company.
TechCrunch reports that the new option, called Community Actions, will allow users to add a title, a description and even tag relevant officials and government agencies as a way to help the petition go viral and prompt others to hit the "support" button.

The New York Times now reports: Facebook Identifies Russia-Linked Misinformation Campaign.
Russia tries to force Facebook and Twitter to relocate servers to Russia.
Facebook will create 1,000 fake jobs in Ireland in 2019.

"Community Actions have their own discussion feed where people can leave comments, create fundraisers, and organize Facebook Events or Call Your Rep campaigns. Facebook displays the numbers of supporters behind a Community Action, but you’ll only be able to see the names of those you’re friends with or that are Pages or public figures," the tech news site reports.
TechCrunch says that it's possible users with "fringe agendas" could harness the petitions feature to bully certain groups or otherwise wreak havoc.
"You can imagine misuses like “Crack down on [minority group]” that are offensive or even dangerous but some see as legitimate," TechCrunch reports.
The Menlo Park, Calif. company, which has struggled to stem the tide of misinformation and fake news on its platform, is reportedly hoping to keep Community Actions more narrowly focused on pushing for government action, as opposed to any random cause that a user could come up with. There are examples of the new petitions here and here.
A Facebook spokesperson gave TechCrunch the following statement on the new initiative:
"Building informed and civically engaged communities is at the core of Facebook’s mission. Every day, people come together on Facebook to advocate for causes they care about, including by contacting their elected officials, launching a fundraiser, or starting a group. Through these and other tools, we have seen people marshal support for and get results on issues that matter to them. Community Action is another way for people to advocate for changes in their communities and partner with elected officials and government agencies on solutions.”
Still, it remains to be seen if Facebook's team of moderators will be able to make judgment calls about whether a petition is worthwhile or being used for bullying purposes.
As TechCrunch notes: "The trouble is that open access draws out the trolls and grifters seeking to fragment society. Facebook will have to assume the thorny responsibility of shepherding the product towards righteousness and defining what that even means."

Facebook says it will invest $300M in local fake, phony news stories

 

Why Sheryl Sandberg, Facebook’s ‘adult in the chatroom’, may pay the price for its failings. Oops!

  Sheryl Sandberg, COO of Facebook, and Jack Dorsey, CEO of Twitter, testify before a Senate intelligence committee hearing on ‘foreign influence operations and their use of social media platforms’ in September.
Photograph: Jim Lo Scalzo/EPA

Facebook’s already terrible year is ending on a new low, as Mark Zuckerberg and his beleaguered executive team battle another share price slide, this time triggered by new revelations about the company’s relaxed attitude to the privacy of its 2.2 billion customers’ data.
Shares dropped more than 7% on Tuesday after it was revealed that the company had bent its own data rules for clients including Netflix, Spotify, Amazon, Microsoft and Sony.
The latest damaging report, published by the New York Times on the back of a District of Columbia lawsuit accusing the social media giant of exposing residents to political manipulation by “failing to protect” user data during the 2016 US presidential election, will surely be disagreeable to Zuckerberg, Facebook’s 34-year-old founder, chief executive and controlling stockholder.
But it is Sheryl Sandberg, former chief of staff at the US treasury under Larry Summers and the woman brought in a decade ago to be the “adult” in Facebook’s executive ranks, who is largely taking the heat for the company’s mounting operational, financial, political and public relations challenges.

Clearly, Sandberg has much to account for as chief operating officer. Facebook’s travails, which have seen its shares drop nearly 40% since their July peak, are not Sandberg’s alone to carry, though on some days it appears the 49-year-old has been doing much of the heavy lifting.
“There’s little doubt the company is facing critical challenges and has made some egregious mistakes,” says Kathryn Kolbert of the Athena Centre for Leadership Studies. “The fact that Sandberg was brought in to be the adult in the room does not absolve Zuckerberg of responsibility.
“Mark Zuckerberg is the CEO of a multibillion-dollar company, and he’s been at it a while. He’s a grown-up. He ought to be responsible. But from what I see, there isn’t the sense that both should be accountable.”

Five weeks ago, Sandberg’s key role in shaping the company’s response to multiple crises was exposed, again by the New York Times. These have included the revelations of Russian interference in the 2016 election, the Cambridge Analytica scandal, and the decision to hire a rightwing opposition research company, Definers Public Affairs, to apply aggressive political campaign tactics to Facebook’s PR and to look into the finances of high-profile investor George Soros days after he publicly criticised the big US technology companies.
Facebook claimed that the research into Soros “was already under way when Sheryl sent an email asking if Mr Soros had shorted Facebook’s stock”.
However, the backlash against Sandberg, until recently a figurehead for tech-branded progressive feminism, has barely relented.
The bestselling author, who just a year ago was riding high on the success of Option B, a follow-up to her empowerment manual Lean In, is taking hits from all sides.
Sandberg, as the executive who helped develop Google’s ad-supported business strategy before joining Facebook, was in the firing line in September when the company became the focus of an American Civil Liberties Union complaint alleging that its advertising system allows employers to target job ads based on gender.

Three weeks ago, before a sold-out audience at the Barclays Center indoor arena in Brooklyn, former first lady Michelle Obama said Sandberg’s belief that women can always “have it all” if they assert themselves across their personal and professional lives – a key tenet of Sandberg’s Lean In philosophy – is “a lie”.

“It’s not always enough to lean in because that shit doesn’t work all the time,” Obama reportedly said.
Then last week the civil rights group NAACP launched a week-long boycott of Facebook after a report it had commissioned highlighted concerns over voter suppression, ad targeting and the company’s own issue with workplace diversity.
“We know that we need to do more: to listen, look deeper and take action to respect fundamental rights,” Sandberg said in a conciliatory statement.


 Mark Zuckerberg, chief executive officer and founder of Facebook, at a technology gathering in Paris in May. Photograph: Christophe Morin/IP3/Getty Images
According to Nathalie Molina Niño, author of Leapfrog: The New Revolution for Women Entrepreneurs, part of the hostility aimed at Sandberg is certainly related to her gender. “The higher a woman gets in terms of success, the greater the culture that enjoys taking her down,” Niño says. Indeed, negative posts on Sandberg’s own Facebook page are largely written by men.
At the same time, Niño points out, Lean In missed the mark because it failed to reflect the experience of most women who are balancing work and family.
As a result, Sandberg has become synonymous with a particular brand of female empowerment that is considered out of touch with notions of inclusiveness.
“It’s applicable only to women in the corporate world and that’s a fairly small, marginal group,” Niño says. What Lean In showed, in fact, “is in contrast to what is true for most women, and the backlash against Sandberg is a reflection of that reality”.
But Sandberg is not standing back. It is a measure of her resilience, as well as solid support from Zuckerberg and Facebook’s board, that she has stayed put.

In an interview with the news network CNN, Zuckerberg said: “Sheryl is a really important part of this company and is leading a lot of the efforts to address a lot of the biggest issues that we have. She’s been an important partner for me for 10 years ... I hope that we work together for decades more to come.”
While Sandberg is taking the heat for Facebook’s problems, Zuckerberg appears to be relatively unscathed. “The company is facing incredible challenges and has made egregious mistakes, so Zuckerberg should bear primary responsibility,” says Charles Elson, an expert in corporate governance at the University of Delaware.
Forcing Sandberg out, he says, would create the perception that the company is taking action, but it would achieve nothing in terms of resolving the seemingly insurmountable issue of policing the user content of a global social network.

“The public wants somebody to take the fall, and since Zuckerberg is the owner he’s not going to do it. So they’ve come to the view that Sandberg is the next best thing.”
But that risks a potential new PR backlash by pushing Sandberg out without solving any of the company’s data privacy and political manipulation issues.
If Sandberg departs, her brand too tarnished to be of further use to Facebook, the decision will be Zuckerberg’s to make. “As the majority voting shareholder, he calls the shots,” Elson points out.

Facebook’s terrible year

17 March
The Observer and New York Times reveal that Facebook accidentally allowed consulting firm Cambridge Analytica to gather members’ data for political purposes. The number of users is later put at 87 million.

10-11 April
Founder Mark Zuckerberg testifies before the Senate judiciary and commerce committees. He says Facebook “didn’t take a broad enough view of our responsibility, and that was a big mistake”.

3 June
The New York Times reports that Facebook struck agreements allowing phone-makers including Apple, Amazon, BlackBerry, Microsoft and Samsung to access users’ personal information.

26 July
Facebook’s share price plunges 20%, wiping $17bn off the value of Zuckerberg’s stock, after the company reveals that 3 million European users have quit.


5 September
Sandberg testifies before the Senate intelligence committee regarding efforts to prevent foreign states from spreading false information on social media.

28 September
Facebook announces that hackers used 400,000 accounts under their control to gain the access tokens of nearly 50 million Facebook users, in the firm’s largest data breach.

14 November
The New York Times reports alleged tactics by the firm to block scrutiny of Russian disinformation and hate speech distributed via Facebook.

15 November
Facebook creates an independent body to monitor offensive content. Zuckerberg says he now believes that Facebook “should not make so many important decisions about free expression and safety on our own”.

21 November
Facebook confirms it hired rightwing political research firm Definers Public Affairs to attack George Soros and undermine critics by publicising their links to him. Zuckerberg and Sandberg deny knowledge of the arrangement.

30 November
The New York Times reports that Sandberg asked Facebook communications staff to research Soros’s financial interests after he described social media, and Facebook in particular, as “a menace to society”.

18 December
The District of Columbia sues Facebook, claiming it failed to safeguard users’ data, exposing nearly half of the district’s population to potential “manipulation for political purposes”.

18 December
The New York Times reveals that Facebook shared user data with other tech giants more widely than previously known in a push for faster user and advertising growth.


After months of revelations about the firm, the executive is being talked of as a sacrifice, not founder Mark Zuckerberg 

Facebook Emails Show Its Real Mission: Making Money and Crushing Competition

Mark Zuckerberg, Facebook’s chief executive, played a central part in the emails disclosed on Wednesday in Parliament. Credit: Etienne Laurent/EPA, via Shutterstock

British lawmakers on Wednesday gave a gift to every Facebook critic who has argued that the company, while branding itself as a do-gooder enterprise, has actually been acting much like any other profit-seeking behemoth.
That gift was 250 pages’ worth of internal emails, in which Facebook’s executives are shown discussing ways to undermine their competitors, obscure their collection of user data and — above all — ensure that their products kept growing.
The emails, which span 2012 to 2015, were originally sealed as evidence in a lawsuit brought against Facebook by Six4Three, an app developer. They were part of a cache of documents seized by a British parliamentary committee as part of a larger investigation into Facebook’s practices and released to the public on Wednesday.
It should not come as a surprise that Facebook — a giant, for-profit company whose early employees reportedly ended staff meetings by chanting “domination!” — would act in its own interests.

But the internal emails, a rare glimpse into Facebook’s inner workings, show that the image the company promoted for years — as an idealistic enterprise more dedicated to “bringing the world closer together” than increasing its own bottom line — was a carefully cultivated smoke screen.
These emails reveal that in the formative years of Facebook’s growth, the company’s executives were ruthless and unsparing in their ambition to collect more data from users, extract concessions from developers and stamp out possible competitors.
“It shows the degree to which the company knowingly and intentionally prioritized growth at all costs,” said Ashkan Soltani, a privacy researcher and former chief technologist of the Federal Trade Commission.
In a blog post on Wednesday, Facebook said the documents included in the lawsuit were a cherry-picked sample that “tells only one side of the story and omits important context.”

Here are four revelations from the emails that detail Facebook’s aggressive quest for growth:
1. The company engineered ways to collect Android users’ data without alerting them.
In February 2015, Facebook had a privacy dilemma.
The company’s growth team — a powerful force within Facebook — wanted to release an update to the Android app that would continually collect users’ entire SMS and call log history. That data would be uploaded to Facebook’s servers, and would help Facebook make better recommendations, such as suggesting new friends to Android users based on the people they’d recently called or texted. (This feature, called “People You May Know,” has been the subject of much controversy.)
But there was a problem: Android’s privacy policies meant that Facebook would need to ask users to opt in to having this data collected. Facebook’s executives worried that asking users for this data could bring a public backlash.
“This is a pretty high risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it,” one executive, Michael LeBeau, wrote.
He outlined the nightmare scenario: “Screenshot of the scary Android permissions screen becomes a meme (as it has in the past), propagates around the web, it gets press attention, and enterprising journalists dig into what exactly the new update is requesting, then write stories about ‘Facebook uses new Android update to pry into your private life in ever more terrifying ways.’”
Ultimately, Facebook found a workaround. Yul Kwon, the head of Facebook’s privacy program, wrote in an email that the growth team had found that if Facebook’s upgraded app asked only to read Android users’ call logs, and not request other types of data from them, users would not be shown a permission pop-up.
“Based on their initial testing, it seems that this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” Mr. Kwon wrote.
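
For context on the permission mechanics the emails describe (this background is not drawn from the documents themselves): on current Android, reading the call log is a "dangerous" permission that normally triggers a runtime prompt, which is the dialog Facebook's growth team wanted to avoid. The sketch below, in Kotlin with a hypothetical activity and request code, shows the standard flow an app goes through today; in the 2015-era flow the emails refer to, permissions declared in the app manifest could be granted silently at install or upgrade time, which is what made the workaround possible.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Hypothetical example, not Facebook's code. The app's manifest would also
// need: <uses-permission android:name="android.permission.READ_CALL_LOG" />
class CallLogDemoActivity : AppCompatActivity() {

    private val callLogRequestCode = 42

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        readCallLogIfPermitted()
    }

    private fun readCallLogIfPermitted() {
        // Check whether the user has already granted call-log access.
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this,
            Manifest.permission.READ_CALL_LOG
        ) == PackageManager.PERMISSION_GRANTED

        if (alreadyGranted) {
            // With the permission in hand, an app could query the call log
            // via the ContentResolver (CallLog.Calls.CONTENT_URI).
        } else {
            // On Android 6.0+ this call is what pops the system permission
            // dialog -- the "scary Android permissions screen" the email
            // worries could become a meme.
            ActivityCompat.requestPermissions(
                this,
                arrayOf(Manifest.permission.READ_CALL_LOG),
                callLogRequestCode
            )
        }
    }
}
```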

In a blog post on Wednesday, Facebook said that it collects call and text message logs only from Android users who opt in, and that as of 2018, it keeps this information only temporarily, since “the information is not as useful after about a year.”
2. Mark Zuckerberg personally approved cutting off a competitor’s data access.
In January 2013, one of Mr. Zuckerberg’s lieutenants emailed him with news about Twitter, one of Facebook’s biggest competitors. The company had introduced a video-sharing service called Vine, which allowed users to create and post six-second video clips.
When new users signed up for Vine, they were given the option of following their Facebook friends — a feature enabled through Facebook’s application program interface, or API. This feature was widely used, and had become a valuable tool for new apps to accelerate user growth. But in Vine’s case, Facebook played hardball.
“Unless anyone raises objections, we will shut down their friends API access today,” wrote the lieutenant, Justin Osofsky, now a Facebook vice president.
Mr. Zuckerberg, the chief executive, replied: “Yup, go for it.”
On Wednesday, Rus Yusupov, one of Vine’s co-founders, said on Twitter, “I remember that day like it was yesterday.”

Facebook’s decision to shut off Vine’s API access proved fateful. Months later, Instagram released its own short-form video feature, which many saw as a further attempt by Facebook to hobble Vine’s growth. Vine shut down in 2016, after stagnant growth and heavy competition led many of its stars and users to go elsewhere.

On Tuesday, Facebook changed its developer policies, ending the prohibition on apps that competed with the company’s own features.
3. Facebook used a privacy app to collect usage data about its competitors.
In 2013, Facebook acquired Onavo, an Israeli analytics company, announcing that Onavo’s tools “will help us provide better, more efficient mobile products.”
One of those tools, an app called Onavo Protect, was especially helpful in helping Facebook sniff out potential competitors. The app, which was billed to users as a way to keep their internet browsing private, also collected data about which apps those people used the most — including apps not owned by Facebook — and fed that information back to Facebook.
According to the emails released on Wednesday, Facebook executives received reports about the performance of rival apps, using data obtained through Onavo.
Sometimes, those reports revealed up-and-coming competitors. One report included in the email cache, dated April 2013, said that WhatsApp, the mobile messaging app, was gaining steam. According to Onavo’s proprietary data, WhatsApp was being used to send 8.2 billion messages a day, whereas Facebook’s own mobile app was sending just 3.5 billion messages daily.
Ten months later, Facebook announced that it was acquiring WhatsApp in a deal valued at $19 billion.
In August, Facebook pulled Onavo Protect from the App Store, after Apple reportedly said that it violated the company’s privacy rules.
4. Facebook executives wanted more social sharing, as long as it happened on Facebook.
In November 2012, Mr. Zuckerberg sent a lengthy note to several top executives called “Platform Model Thoughts.” It outlined how intensely he wanted Facebook to be the center of everyone’s social life online.

The email addressed a debate that was raging inside Facebook at the time, about whether outside app developers should have to pay to connect their apps to Facebook’s developer platform. Mr. Zuckerberg said that he was leaning away from a charge-for-access model, and toward what he called “full reciprocity” — giving third-party developers the ability to connect their apps to Facebook free, in exchange for those apps’ giving data back to Facebook, and making it easy for users to post their activity from those services on their Facebook timelines.
By giving away access, Mr. Zuckerberg said, Facebook could entice more developers to build on its platform. And by requiring app developers to send data back to Facebook, it could use those apps to increase the value of its own network. He wrote that social apps “may be good for the world but it’s not good for us unless people also share back to Facebook.”
Facebook later put in place a version of this “reciprocity rule” that required developers to make it possible for users of their apps to post their activity to Facebook, but did not require them to send usage data back to Facebook. (Not coincidentally, this “reciprocity rule” explains why for several years, it was virtually impossible to go on Facebook without seeing dozens of updates about what your friends were watching on Hulu or listening to on Spotify.)
In a Facebook post on Wednesday, after the emails were made public, Mr. Zuckerberg wrote that the company had tightened its developer policies in 2014 in order to protect users from “sketchy apps” that might misuse their data.
But back in 2012, the company’s worry was not about data misuse. Instead, the company was chiefly concerned with how to use those developers’ apps to spur its own growth.
Sheryl Sandberg, Facebook’s chief operating officer, wrote back to concur with Mr. Zuckerberg’s approach to data reciprocity.
“I think the observation that we are trying to maximize sharing on Facebook, not just sharing in the world, is a critical one,” she wrote.

 

Facebook Streaming All Episodes of Joss Whedon’s ‘Buffy the Vampire Slayer,’ ‘Angel,’ ‘Firefly’ for Free

Facebook is playing the nostalgia card in its latest bid to drive up video viewing and video ad sales: using TV reruns.
The social-media giant is launching every episode of Joss Whedon’s supernatural drama “Buffy the Vampire Slayer” and spinoff “Angel” along with sci-fi show “Firefly” on Facebook Watch for free to users in the U.S. All 268 episodes of the shows will be available to watch starting Friday, Nov. 30, under a licensing pact with 20th Century Fox Television.
Facebook has set up dedicated show pages for each of the series: “Buffy the Vampire Slayer” will be available at this link; “Angel” is available at this link; and “Firefly” is streaming here.
The trio of shows, which aired on TV more than 15 years ago, is not exclusive to Facebook: All seasons of the three also are available on Hulu’s subscription service.

But Facebook believes the cult-favorite shows — particularly “Buffy” — will drive up watch time by letting fans experience the series in a brand-new, social way (along with the fact they’re free to watch). This week it expanded the Watch Party co-viewing feature to everyone on Facebook, making it possible for users to start Watch Parties from their Timeline or from any public video on Facebook. The company’s hope is that “Buffy,” “Angel” and “Firefly” will spawn thousands of Watch Party sessions.
“What we’ve been focused on Watch is building a people-centric video platform, creating a social viewing experience where you can connect with other people who love the shows, and even the creatives who worked on them,” said Matthew Henick, Facebook’s head of content planning and strategy for media partnerships.
The three shows from Joss Whedon in particular “have incredibly dedicated fanbases that have persisted and even grown online,” Henick said, noting that a TV reboot of “Buffy the Vampire Slayer” is in development at 20th Century Fox Television. “This is great content to experiment with.”
Facebook also is enlisting talent from the shows to promote the free streaming — including Sarah Michelle Gellar, star of the “Buffy the Vampire Slayer” series. On Friday, Gellar announced the free streaming of the show in a video on her Facebook Page, which has nearly 1 million followers, in which she underscored the platform’s co-viewing features to watch along with other “Buffy” fans on Facebook.
“It’s time to slay all day,” Gellar says in the announcement.
Other talent from “Buffy,” “Angel” and “Firefly” are expected to participate in live conversations via Watch Party, according to Facebook. Facebook has scheduled Watch Parties for each show: The “Buffy” co-viewing will kick off at 3 p.m. PT on Friday (Nov. 30); “Angel” will start on Dec. 1 at 12 p.m. PT; and “Firefly” will launch Dec. 2 at 12 p.m. PT.
Asked whether Facebook is seeking other TV shows to bring to the platform, Henick said, “I wouldn’t say we have a huge licensing plan… We have a balanced portfolio.”
First launched in the U.S. in August 2017, Facebook Watch includes original series and other content, live sports, and programming produced by independent creators. The deal with 20th Century Fox TV is “part of our larger strategy,” Henick said. “We have lots of different ways to get content onto the platform, and we’ll find a variety of ways to do that.”

Facebook is streaming all seven seasons of “Buffy the Vampire Slayer” (144 episodes), five seasons of “Angel” (110 episodes) and the single season of “Firefly” (14 episodes). The company declined to disclose how long the shows will be available on Facebook or discuss other terms. The episodes of the shows will include Facebook’s midroll ad breaks, with ads sold by Facebook.
Facebook has been trying to steer users to Watch, where it can monetize video ads more effectively compared with videos in users’ News Feeds. But according to a survey conducted this spring, half of U.S. adult Facebook users had never even heard of Facebook Watch.
“Video is a critical part of the future. It will end up being a large part of our business as well,” CEO Mark Zuckerberg told analysts on the company’s Q3 earnings call last month. He also admitted, “We are well behind YouTube.”
Initially, “Buffy the Vampire Slayer,” “Angel” and “Firefly” will be available on mobile and web platforms, with plans to later make them available on Facebook’s connected-TV apps.
“We want to make sure we’re presenting the best social experience to start, and where people interacting are on mobile and web,” Henick said. “What we’re looking for are meaningful connections between fans.”


The Decline and Fall of the Zuckerberg Empire




Mark Zuckerberg isn’t the first person in human history to draw inspiration from Augustus Caesar, the founder of the Roman Empire, but he’s one of a very few for whom the lessons of Augustus’s reign have a concrete urgency. Both men, after all, built international empires before the age of 33. “Basically, through a really harsh approach, he established 200 years of world peace,” Zuckerberg explained to a New Yorker reporter earlier this year. “What are the trade-offs in that?” Augustus, Zuckerberg explained, “had to do certain things” to ensure the stability of his empire. So too, apparently, does Facebook.

A 6,000-word report published in the New York Times last week disclosed in humiliating detail the lengths to which Facebook has gone to protect its dominance and attack its critics. As various interlocking crises concerning hate speech, misinformation, and data privacy widened, top executives ignored, and then kept secret, evidence that the platform had become a vector for misinformation campaigns by government-backed Russian trolls. The company mounted a shockingly aggressive lobbying and public-relations campaign, which included creating and circulating pro-Facebook blog posts that were functionally indistinguishable from the “coordinated inauthentic content” (that is, fake news) Facebook had pledged to eliminate from its platform. In one particularly galling example, the company hired a political consultancy that spread a conspiracy theory accusing George Soros of funding anti-Facebook protests. Zuckerberg, it seems, had taken the “really harsh approach” to establishing digital hegemony.

Augustus, at least, was a charismatic leader and confident ruler. No one at Facebook comes across in the Times piece as a similarly bold visionary. Not Joel Kaplan, Facebook’s top lobbyist, who encouraged the company to suppress and hold back findings of Russian influence campaigns for fear of alienating Republicans. Not Chuck Schumer, who confronted one of the Senate’s top Facebook critics and told him to figure out how to work with the company. (Schumer’s daughter works for Facebook.) Not Sheryl Sandberg, the adult-in-the-room COO who presided over the entire suspicious and hostile crisis response. And certainly not Zuckerberg, who seems to have been consistently absent — or plainly uninterested — during key meetings about Facebook’s handling of hate speech and misinformation. It’s hard to be a historical visionary hailed for brokering stability by making morally complex decisions if you can’t even be bothered to show up to the Morally Complex Decisions meetings.

Demands for the CEO to abdicate, or to at least step down from his role as chairman of the board, have increased, but Zuckerberg — who controls 60 percent of Facebook’s voting shares — is no more likely to resign than Augustus would have been. As the Wall Street Journal reports, he told company executives earlier this year that Facebook is at war. The trouble is that the war may have already been lost. Beset by stagnant growth, low employee morale, plummeting stock, public outrage, and a bipartisan group of enemies in government, the old Facebook, the ever-expanding, government-ignoring, world-conquering company of only a year or two ago, is gone.

Its own internal surveys bear this out: Facebook was once legendary for the cultish dedication of its employees — reporting on the company was nearly impossible because workers refused to leak — but employee confidence in Facebook’s future, as judged by internal surveys reported on by the Journal, is down 32 percentage points over the past year, to 52 percent. Around the same number of Facebook employees think the company is making the world a better place, down 19 points from this time last year, and employees report that they plan to leave Facebook for new jobs earlier than they had in the past. Scarier even for Facebook is the possibility, for which there is some anecdotal evidence, that it’s no longer a sought-after employer for top computer-science and engineering graduates.

There’s already ample evidence that Facebook is losing its hold on users. In the markets where Facebook is most profitable, its user base is either stagnant, as in North America, or actually shrinking, as in Europe. The company might be able to reassure itself that Instagram — which it wholly owns — is still expanding impressively, but the success of Instagram hasn’t stopped Facebook from getting punished on the stock market.

Facebook blames its attenuating European-user figures not on its faltering public image but on the European Union’s aggressive new privacy law, GDPR. But this raises a more troubling possibility for Facebook: that its continued success is dependent on a soft regulatory touch it can no longer expect from governments. What makes the Times revelations particularly dangerous to Zuckerberg’s empire is that they arrive at a moment when there is actually the political will to challenge its dominance. The fall of Facebook may not come after a long decline but through outside action — slapped with major fines and expensive investigations, chastened and disempowered by a new regulatory regime. “Facebook cannot be trusted to regulate itself,” Rhode Island representative David Cicilline — who will likely run the House Judiciary subcommittee on antitrust issues — tweeted last week.

In the Senate, skepticism regarding tech giants is enough of a bipartisan issue that there appears to be room for an agreement on data protection and user privacy. “I’m not looking to regulate [Zuckerberg] half to death,” Republican senator John Kennedy said earlier this year, “but I can tell you this: The issue isn’t going away.” It’s true that some Republican critics seem less concerned about Facebook’s overwhelming power than about the spurious claims of conservatives that their views are being suppressed on the platform, but there is genuine Republican interest in reining in Facebook. Action against big tech companies is a beloved topic of Steve Bannon and his wing of the GOP, and Trump himself, of course, has no particular affection for the company.

Trump’s Department of Justice, in fact, might represent Facebook’s biggest threat. The head of the Antitrust Division, Makan Delrahim, has been singing the praises of the famous DOJ Microsoft antitrust lawsuit. As Tim Wu, a former FTC adviser and the author of The Curse of Bigness: Antitrust in the Gilded Age, puts it, “whoever leads the case to break up Facebook will have the political winds and the public at his back.” A new Axios poll supports this assessment. Americans have reversed their opinions about social media over the past year, and a majority of Americans across the political spectrum now believe that social media hurts democracy and that the government isn’t doing enough to regulate it.

It’s the public outrage that should be most worrying to Facebook. Other tech giants have managed to escape the opprobrium directed at Facebook because they have obviously useful services. Amazon delivers things to your house. Google helps you find things online. Apple sells actual objects. Facebook … helps you get into fights? Delivers your old classmates’ political opinions to your brain?

Over the past year, I’ve spent time trying to wean myself off tech mega-platforms, generally with little success. Google’s search, for all my complaints, is still the best way for me to navigate the internet; Amazon is still so unbelievably convenient that the thought of quitting it exhausts me. But I logged out of Facebook more than a year ago and have logged back in fewer than a dozen times since. Checking Facebook had been a daily habit, but it also hadn’t improved my life or made itself necessary. Not many Roman plebes would have said that about the Pax Romana. Some empires fall because they’re invaded from the outside or rot from within. Zuckerberg’s could be the first in history to collapse simply because its citizens logged out.


Mark Zuckerberg has been fascinated by Augustus Caesar for years, and it raises some questions about the future of Facebook

 

Facebook CEO Mark Zuckerberg is a history buff at heart.
In a recent New Yorker profile, the tech mogul revealed that his fascination with the ancient Roman emperor Augustus even figured into his 2012 honeymoon in Rome.
"My wife was making fun of me, saying she thought there were three people on the honeymoon: me, her, and Augustus," Zuckerberg told the New Yorker. "All the photos were different sculptures of Augustus."
Zuckerberg's enthusiasm for classical history reportedly dates back to his time at Phillips Exeter Academy, where he studied Latin and immersed himself in learning about the civilization's "good and bad and complex figures." 

On his fascination with Augustus, Zuckerberg said, "Basically, through a really harsh approach, he established two hundred years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today. Two hundred years feels unattainable."
Zuckerberg's deep interest in another young upstart who disrupted — and connected — the world like never before doesn't come as a surprise. But it might raise some questions about how far the CEO is willing to go in order to achieve Facebook's mission to "bring the world closer together."
Because, as Facebook's controversial role in the 2016 US election demonstrated, whether you're uniting the world through conquest or clicks, everything comes at a price.

Augustus' triumph came at the cost of the Roman Republic

Before Augustus was declared the first citizen of Rome or the son of the divine or a god among men, he was just a teenager named Octavian.
Granted, he was the adopted son of the powerful dictator Julius Caesar. But he wasn't the only power player on the block in the bloody political circus that followed his adopted dad's assassination.
The young man fared well, however. He accrued power and successfully waged war against the assassins and, eventually, his early allies, like Mark Antony.
Octavian's victory in the 31 BCE Battle of Actium sank the hopes of Mark Antony, Cleopatra, and their supporters — the power couple committed suicide shortly after the loss. And so the path was cleared for Octavian — who eventually took on the honorific "Augustus" — to become the sole ruler of Rome.
Let's be clear — Augustus didn't single-handedly murder the Roman Republic. Nor was the Republic some sort of perfect, equitable utopia. The entire system was falling apart long before Augustus came onto the scene. And the republican facade endured during his reign.
But his rule marked the death knell of the Republic and the dawn of the Roman Empire.

The Pax Romana wasn't entirely peaceful, either

Augustus gets a ton of credit for the Pax Romana — or "Roman peace."
And, sure, his reign did kick off a period of relative calm that stretched from the beginning of his rule in 27 BCE all the way into the reigns of the Five Good Emperors in the second century CE.
Before his time, the Roman Republic had been roiled by a number of civil wars: Rome faced down its various Italian allies during the Social War; generals Marius and Sulla wrestled for control; Julius Caesar squared off against his rival Pompey. And, of course, Augustus himself seized power through violence, and snuffed out his rivals along the way.
But it's inaccurate to think of the Roman Empire during the subsequent Pax Romana as a war-free zone.
In "Rome's Fall and After," historian Walter Goffart writes, "The volume of the Cambridge Ancient History for the years A.D. 70-192 is called 'The Imperial Peace,' but peace is not what one finds in its pages."
There were revolts in Judea, Mauretania, and Illyricum during Augustus' reign alone. He also annexed Egypt and northern Spain during his stint as emperor.

Reflecting on the darker side of Augustus' rise to power

Comparatively, it's fair to say the Pax Romana did represent a time of peace for Rome. But, as historian Arnaldo Momigliano wrote in the Journal of the Warburg and Courtauld Institutes, "Pax Romana is a simple formula for propaganda, but a difficult subject for research."
What's more, there was a dark side to the Roman Empire's very definition of peace. Ancient Romans didn't think of peace as some sort of tranquil kumbaya-fest between nations. According to Momigliano, they conceived of peace more as a state in which all of Rome's rivals had been vanquished.
In the New Yorker interview, Zuckerberg rightfully concluded that the Pax Romana "didn't come for free" and vaguely acknowledged that Augustus "had to do certain things" in order to secure the peace.
Today, people around the world are beginning to question the impact that tech giants like Facebook are having on democratic societies. That's not to say that Zuckerberg is a calculating ancient despot like Augustus. But social media platforms are having a real impact on the political realm.
Heck, the title of the New Yorker profile in which he's quoted is "Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?" 

John Naughton at the Guardian has a perfect—albeit obvious—observation: Despite their overwhelming dominance, Facebook and Apple will eventually fall. "History should teach us that for today's technology industry titans, the only way is down." That goes for Google, too. And Amazon. It's inevitable.
Naughton's argument—familiar to historians and Guns N' Roses fans alike—is that nothing lasts forever. He realized that this truism applied to today's tech giants after rereading Paul Kennedy's The Rise and Fall of the Great Powers, which chronicles the history of Rome, Imperial Spain, and Britain. All of them reached world domination, and then the once-unthinkable happened: they fell. They crumbled and faded into history, losing more and more political and cultural relevance as time passed. The farther they fell, the more irrelevant they became. Ask Spain, Rome, or the UK. Each fell because it tried to control it all: each tried to impose its ways, close its walls and command commerce, and treated its own culture as the only answer to everything. In that confidence, and with absolute power, they thought nobody else could compete with them. And they failed.
But why look at ancient history? We might as well ask companies like Ford, Microsoft, or Kodak, all of which once dominated the markets they had created and set trends for decades to come. Ultimately, though, they faded into irrelevance.
Some of them went bankrupt, like Kodak. Some, like Ford or Microsoft, became just another face in the crowd, still big but no longer setting the agenda, no longer dominating 90 or 95 percent of everything. Some were superseded by new markets that made their own markets obsolete, killing their power—like the fall of the PC in favor of phones and tablets. Some just lost relevance, like Kodak's chemical business, replaced by the ascendance of digital photography.
When these companies were dominant, it was inconceivable that they would ever not be. Apple, Amazon, Google, and Facebook won't be any different. They will not fall at the same time, but they all will. From his article:
Although the eclipsing of Apple and Facebook is inevitable, the timing and causes of their eventual declines will differ. Apple's current strength is that it actually makes things that people are desperate to buy and on which the company makes huge margins. The inexorable logic of the hardware business is that those margins will decline as the competition increases, so Apple will become less profitable over the longer term. What will determine its future is whether it can come up with new, market-creating products such as the iPod, iPhone and iPad.
Facebook, on the other hand, makes nothing. It just provides an online service that, for the moment, people seem to value. But in order to make money out of those users and satisfy the denizens of Wall Street, it has to become ever more intrusive and manipulative. It's condemned, in other words, to intrusive overstretch. Which is why, in the end, it will become a footnote in the history of the internet. Just like Microsoft, in fact. Sic transit gloria.
It's not that Apple will disappear. With over $100 billion in the bank and the ability to churn out desirable products with great margins, it will continue to be a successful, profitable company for a long, long time. Most probably, it will become a Sony: relevant, but not dominant. Amazon, despite its razor-thin profit margins, is likely here to stay in some capacity. Google may turn into yet another Microsoft.

As for Facebook... as Naughton says, it may become completely irrelevant and disappear, because it doesn't make anything irreplaceable. As it continues to push the limits of privacy to satisfy the demands of advertisers and shareholders, its users will seek higher ground.
Facebook may be the first one to fall. But all of today's tech giants will follow its path into dull obsolescence, just like every empire before them. At least we can look forward to whatever takes their place.




The New York Times

September 28, 2018

Facebook Security Breach Exposes Accounts of 50 Million Users
SAN FRANCISCO — Facebook, already facing scrutiny over how it handles the private information of its users, said on Friday that an attack on its computer network had exposed the personal information of nearly 50 million users.

The breach, which was discovered this week, was the largest in the company’s 14-year history. The attackers exploited a feature in Facebook’s code to gain access to user accounts and potentially take control of them.
The news could not have come at a worse time for Facebook. It has been buffeted over the last year by scandal, from revelations that a British analytics firm got access to the private information of up to 87 million users to worries that disinformation on Facebook has affected elections and even led to deaths in several countries.
Senior executives have testified several times this year in congressional hearings where some lawmakers suggested that the government will need to step in if the social network is unable to get tighter control of its service. On Friday, regulators and lawmakers quickly seized on the breach to renew calls for more oversight.
“This is another sobering indicator that Congress needs to step up and take action to protect the privacy and security of social media users,” Senator Mark Warner, a Democrat from Virginia and one of Facebook’s most vocal critics in Congress, said in a statement. “A full investigation should be swiftly conducted and made public so that we can understand more about what happened.”
In the conference call on Friday, Guy Rosen, a vice president of product management at Facebook, declined to say whether the attack could have been coordinated by hackers supported by a nation-state.
Three software flaws in Facebook’s systems allowed hackers to break into user accounts, including those of the top executives Mark Zuckerberg and Sheryl Sandberg, according to two people familiar with the investigation but not allowed to discuss it publicly. Once in, the attackers could have gained access to apps like Spotify, Instagram and hundreds of others that give users a way to log into their systems through Facebook.
The software bugs were particularly awkward for a company that takes pride in its engineering: The first two were introduced by an online tool meant to improve the privacy of users. The third was introduced in July 2017 by a tool meant to easily upload birthday videos.
Facebook said it had fixed the vulnerabilities and notified law enforcement officials. Company officials do not know the identity or the origin of the attackers, nor have they fully assessed the scope of the attack or if particular users were targeted. The investigation is still in its beginning stages.
“We’re taking it really seriously,” Mr. Zuckerberg, the chief executive, said in a conference call with reporters. “I’m glad we found this, but it definitely is an issue that this happened in the first place.”
Critics say the attack is the latest sign that Facebook has yet to come to terms with its problems.
“Breaches don’t just violate our privacy. They create enormous risks for our economy and national security,” Rohit Chopra, a commissioner of the Federal Trade Commission, said in a statement. “The cost of inaction is growing, and we need answers.”
Facebook has been roundly criticized for being slow to acknowledge a vast disinformation campaign run by Russian operatives on its platform and other social media outlets before the 2016 presidential election.
Ms. Sandberg, Facebook’s chief operating officer, testified in a Senate hearing earlier that month about what the company was trying to do to prevent the same thing from happening in the midterm elections in November.
In April, Mr. Zuckerberg testified about revelations that Cambridge Analytica, the British analytics firm that worked with the Trump presidential campaign, siphoned personal information of millions of Facebook users.
Outside the United States, the impact of disinformation appearing on Facebook and the popular messaging service it owns, WhatsApp, has been severe. In countries such as Myanmar and India, false rumors spread on social media are believed to have led to widespread killing.
Facebook said the attackers had exploited two bugs in the site’s “View As” feature, which allows users to check on what information other people can see about them. The feature was built to give users more control over their privacy.
The company said those flaws were compounded by a bug in Facebook’s video-uploading program for birthday celebrations, a software feature that was introduced in July 2017. The flaw allowed the attackers to steal so-called access tokens — digital keys that allow access to an account.
It is not clear when the attack happened, but it appears to have occurred after the video-uploading program was introduced, Facebook said. The company forced more than 90 million users to log out early Friday, a common safety measure taken when accounts have been compromised.
The hackers also tried to harvest people’s private information, including name, sex and hometown, from Facebook’s systems, Mr. Rosen said. The company could not determine the extent of the attackers’ access to third-party accounts, he said.
Facebook has been reshuffling its security teams since Alex Stamos, its chief security officer, left in August for a teaching position at Stanford University. Instead of acting as a stand-alone group, security team members now work more closely with product teams across the company. The move, the company said, is an effort to embed security across every step of Facebook product development.
Part of that effort has been to gird Facebook against attacks on its network in preparation for the midterm elections. Facebook has spent months setting up new systems to pre-empt such attacks, and has already dealt with a number of incidents believed to be connected to elections in Mexico, Brazil and other countries.
Still, the recently discovered breach was a reminder that it is exceptionally difficult to entirely secure a system that has more than 2.2 billion users all over the world and that connects with thousands of third-party services.
“This has really shown us that because today’s digital environment is so complex, a compromise on a single platform — especially one as popular and widely reaching as Facebook — can have consequences that are much more far-reaching than what we can tell in early days of the investigation,” said April Doss, chairwoman of cybersecurity at the law firm Saul Ewing.
As the news of Facebook’s data breach spread quickly across Twitter, Google searches and other online sites, there was one place where it remained difficult to find some detailed reports: Facebook.
Users who posted breaking stories about the breach from The Guardian, The Associated Press and other outlets were prompted with a notice that their posts had been taken down. So many people were posting the stories, they looked like suspicious activity to the systems that Facebook uses to block abuse of its network.
“We removed this post because it looked like spam to us,” the notice said.

Sheryl Sandberg Is Said to Have Asked Facebook Staff to Research George Soros

Sheryl Sandberg asked Facebook’s communications staff to research George Soros’s financial interests in the wake of his high-profile attacks on tech companies, according to three people with knowledge of her request, indicating that Facebook’s second in command was directly involved in the social network’s response to the liberal billionaire.
Ms. Sandberg, Facebook’s chief operating officer, asked for the information in an email to a senior executive in January that was forwarded to other senior communications and policy staff, the people said. The email came within days of a blistering speech Mr. Soros delivered that month at the World Economic Forum, attacking Facebook and Google as a “menace” to society and calling for the companies to be regulated.
Ms. Sandberg — who was at the forum, but was not present for Mr. Soros’s speech, according to a person who attended it — requested an examination into why Mr. Soros had criticized the tech companies and whether he stood to gain financially from the attacks. At the time, Facebook was under growing scrutiny for the role its platform had played in disseminating Russian propaganda and fomenting campaigns of hatred in Myanmar and other countries.

Facebook hired an opposition-research firm that gathered and circulated to reporters information about Mr. Soros’s funding of groups critical of the company. Credit: Simon Dawson/Bloomberg, via Getty Images

Facebook later commissioned a campaign-style opposition research effort by Definers Public Affairs, a Republican-linked firm, which gathered and circulated to reporters public information about Mr. Soros’s funding of American advocacy groups critical of Facebook.

Those efforts, revealed this month in a New York Times investigation, set off a public relations debacle for Ms. Sandberg and for Facebook, which was accused of trafficking in anti-Semitic attacks against the billionaire. Facebook quickly fired Definers.
The people with knowledge of Ms. Sandberg’s email asked for anonymity because they weren’t authorized to discuss the message and feared retribution.
In a statement, Facebook said that the company had already begun researching Mr. Soros when Ms. Sandberg made her request.
“Mr. Soros is a prominent investor and we looked into his investments and trading activity related to Facebook,” the company said. “That research was already underway when Sheryl sent an email asking if Mr. Soros had shorted Facebook’s stock.” The company said that while Ms. Sandberg “takes full responsibility for any activity that happened on her watch,” she did not personally direct any research on Freedom from Facebook, an anti-Facebook coalition whose members were among the subjects of Definers’ later work.

Eddie Vale, a spokesman for Freedom from Facebook, said he was skeptical of the company’s account.
“In light of Sandberg’s continuously changing story on the Soros research, there’s no way their denials about attacking other critics can be taken at face value,” Mr. Vale said. “Facebook must immediately release any emails and any research about targeting the Freedom from Facebook coalition or any member organizations.”
The revelation complicates Ms. Sandberg’s shifting explanations of her role in Facebook’s decisions to hire Definers and go on the offensive against the social network’s growing legion of critics. Ms. Sandberg at first denied knowing that Facebook had hired Definers, before acknowledging in a post last week that some of the company’s work for Facebook had crossed her desk.

Elliot J. Schrage, who oversaw Facebook’s communications team and is leaving the company, previously took responsibility for hiring the firm that looked into Mr. Soros.CreditJames Lawler Duggan/Reuters


In that post, Ms. Sandberg did not explicitly deny that she had asked for research into Mr. Soros. Instead, a deputy who oversaw the communications team but is now leaving the company, Elliot J. Schrage, took responsibility for hiring Definers and initiating Definers’ investigation into Mr. Soros. It is unclear what, if any, involvement Ms. Sandberg had in that ultimate response to Mr. Soros.
“We had not heard such criticism from him before and wanted to determine if he had any financial motivation,” Mr. Schrage said of Mr. Soros. “Definers researched this using public information.”
Facebook has defended its inquiries into Mr. Soros as a prudent and necessary step for any public company under attack by a high-profile figure — particularly one like Mr. Soros, a onetime currency trader who made a fortune in the 1990s betting against the British pound.
But the revelations are likely to escalate pressure on Ms. Sandberg, an embattled Silicon Valley star and feminist author.

Facebook Says It Deleted 865 Million Posts, Mostly Spam

Facebook published numbers for the first time detailing how much and what type of content it removes from the social network.CreditJason Henry for The New York Times

SAN FRANCISCO — Facebook has been under pressure for its failure to remove violence, nudity, hate speech and other inflammatory content from its site. Government officials, activists and academics have long pushed the social network to disclose more about how it deals with such posts.
Now, Facebook is pulling back the curtain on those efforts — but only so far.
On Tuesday, the Silicon Valley company published numbers for the first time detailing how much and what type of content it takes down from the social network. In an 86-page report, Facebook revealed that it deleted 865.8 million posts in the first quarter of 2018, the vast majority of which were spam, with a minority of posts related to nudity, graphic violence, hate speech and terrorism.
Facebook also said it removed 583 million fake accounts in the same period. Of the accounts that remained, the company said 3 percent to 4 percent were fake.
Guy Rosen, Facebook’s vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content. The inaugural report was intended to “help our teams understand what is happening” on the site, he said. Facebook hopes to continue publishing reports about its content removal every six months or so.

Yet the figures the company published were limited. Facebook declined to provide examples of graphically violent posts or hate speech that it removed, for example. The social network said it had taken down more posts from its site in the first three months of 2018 than it had during the last quarter of 2017, but it gave no specific figures from previous years, making it hard to assess how much it had stepped up its efforts.
The report also did not include all the posts that Facebook had removed. After publication of this article, a Facebook spokeswoman said other types of content had been taken down from the site in the first quarter because they violated community standards, but those were not detailed in the report because the company was still developing metrics to study them.

Facebook also used the new report to advance a push around artificial intelligence to root out inappropriate posts. Facebook’s chief executive, Mark Zuckerberg, has long highlighted A.I. as the main solution to helping the company sift through the billions of pieces of content that users put on its site every day, even though critics have asked why the social network cannot hire more people to do the job.
“If we do our job really well, we can be in a place where every piece of content is flagged by artificial intelligence before our users see it,” said Alex Schultz, Facebook’s vice president of data analytics. “Our goal is to drive this to 100 percent.”
Facebook is aiming for more transparency after a turbulent period. The company has been under fire for a proliferation of false news, divisive messages and other inflammatory content on its site, which in some cases have led to real-life incidents. Graphic violence continues to be widely shared on Facebook, especially in countries like Myanmar and Sri Lanka, stoking tensions and helping to fuel attacks and violence.

Facebook has separately been grappling with a data privacy scandal over the improper harvesting of millions of its users’ information by political consulting firm Cambridge Analytica. Mr. Zuckerberg has said that the company needs to do better and has pledged to curb the abuse of its platform by bad actors.
On Monday, as part of an attempt to improve protection of its users’ information, Facebook said it had suspended roughly 200 third-party apps that collected data from its members while it undertook a thorough investigation.
The new report about content removal was another step by Facebook to clean up its site. Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, said she welcomed Facebook’s numbers.
“It’s a good move and it’s a long time coming,” she said. “But it’s also frustrating because we’ve known that this has needed to happen for a long time. We need more transparency about how Facebook identifies content, and what it removes going forward.”
Samuel Woolley, research director of the Institute for the Future, a think tank in Palo Alto, Calif., said Facebook needed to bring in more independent voices to corroborate its numbers.

Mark Zuckerberg, Facebook’s chief executive, has said the company needs to do better and has pledged to curb the abuse of its platform by bad actors.CreditAndrew Harnik/Associated Press

“Why should anyone believe what Facebook says about this, when they have such a bad track record about letting the public know about misuse of their platform as it is happening?” he said. “We are relying on Facebook to self-report on itself, without any independent vetting. That is concerning to me.”
Facebook previously declined to reveal its content removal efforts, citing a lack of internal metrics. Instead, it published a country-by-country breakdown of how many requests it received from governments to obtain Facebook data or restrict content from Facebook users in that country. Those figures did not specify what type of data the governments asked for or what posts were restricted. Facebook also published a country-by-country report on Tuesday.

According to the new content removal report, about 97 percent of the 865.8 million pieces of content that Facebook took down from its site in the first quarter was spam. About 2.4 percent of that deleted content had nudity, Facebook said, with even smaller percentages of posts removed for graphic violence, hate speech and terrorism.
In the report, Facebook said its A.I. found 99.5 percent of terrorist content on the site, leading to the removal of roughly 1.9 million pieces of content in the first quarter. The A.I. also detected 95.8 percent of posts that were problematic because of nudity, with 21 million such posts taken down.
But Facebook still relied on human moderators to identify hate speech because automated programs have a hard time understanding context and culture. Of the 2.5 million pieces of hate speech Facebook removed in the first quarter, 38 percent was detected by A.I., according to the new report.
Facebook said it also removed 3.4 million posts that had graphic violence, 85.6 percent of which were detected by A.I.
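To put those percentages in absolute terms, here is a quick back-of-the-envelope conversion using only the figures quoted above; the results are necessarily approximate.

    # Rough conversion of the report's percentages into absolute counts,
    # using only the figures quoted in this article (values are approximate).
    TOTAL_REMOVED = 865_800_000                    # pieces of content, Q1 2018

    spam           = 0.97  * TOTAL_REMOVED         # ~839.8 million posts
    nudity         = 0.024 * TOTAL_REMOVED         # ~20.8 million posts
    hate_speech    = 2_500_000                     # removed in Q1
    hate_by_ai     = 0.38  * hate_speech           # ~950,000 flagged by A.I.
    violence       = 3_400_000                     # removed in Q1
    violence_by_ai = 0.856 * violence              # ~2.9 million flagged by A.I.

    print(f"Spam: {spam:,.0f}, nudity: {nudity:,.0f}, "
          f"A.I.-flagged hate speech: {hate_by_ai:,.0f}, "
          f"A.I.-flagged violence: {violence_by_ai:,.0f}")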
The company did not break down the numbers of graphically violent posts by geography, even though Mr. Schultz said that at times of war, people in certain countries would be more likely to see graphic violence than others. He said that in the future, Facebook hoped to publish country-specific numbers.
The report also did not include any figures on the amount of false news on Facebook as the company did not have an explicit policy on removing misleading news stories, Mr. Schultz said. Instead, Facebook has tried to deter the spread of misinformation by removing spam sites that profit from advertisements that run alongside false news, and by removing fake accounts that spread them.

Correction: 
An earlier version of this article, using information provided by Facebook, referred incorrectly to the 3 to 4 percent of accounts on the social network that were fake. It is the percentage of Facebook accounts that were fake even after a purge of such accounts. It is not the percentage of Facebook accounts that were purged as being fake. The article also misstated how often Facebook hopes to publish reports about the content it removes. It is roughly every six months, not every quarter.

Tech #BigBusiness

Social Media Roundup: Facebook Cryptocurrency Rumor, Instagram Emoji Slider Scale, Snapchat Rollback


A group of social media icons on a mobile device (Photo by Alberto Pezzali/NurPhoto via Getty Images)

“Social Media Roundup” is a weekly roundup of news pertaining to all of your favorite websites and applications used for social networking. Published on Sundays, “Social Media Roundup” will help you stay up-to-date on all the important social media news you need to know.
Facebook
Leadership Team Reorganization
Facebook reorganized its leadership teams this past week, according to Recode. This included shake-ups at the parent company along with Instagram, Messenger and WhatsApp. One of the teams being created as part of the reorganization will focus on blockchain technology. Recode said Facebook is structuring the company under three main groups: apps; new platforms and infrastructure; and central product services.

The apps division will be led by chief product officer Chris Cox. Facebook’s VP of Internet.org Chris Daniels will be overseeing the development of WhatsApp following the departure of Jan Koum. And Stan Chudnovsky will be the head of the Messenger team. David Marcus is moving from the head of Messenger to the team that is heading up blockchain initiatives. And Will Cathcart is going to focus on the main Facebook app.
The new platforms and infrastructure team will be headed up by CTO Mike Schroepfer. Reporting to Schroepfer are Andrew “Boz” Bosworth (head of the AR, VR and hardware teams), David Marcus (blockchain initiatives), Jay Parikh (head of the team working on privacy products and security initiatives), Kang-Xing Jin (head of Facebook Workplace) and Jerome Pesenti (head of artificial intelligence).
And the Central Product Services arm is going to be led by Javier Olivan. This division will handle ads, security and growth. Olivan will be managing Mark Rabkin (head of ads and local efforts), Naomi Gleit (community growth and social good) and Alex Schultz (growth marketing, data analytics and internationalization).
Adam Mosseri is moving from the News Feed to Instagram as the VP of Product. And the previous VP of Product at Instagram, Kevin Weil, is moving to the new blockchain team.
Cryptocurrency
According to Cheddar, Facebook is rumored to be considering its own cryptocurrency. It is believed that Facebook’s cryptocurrency would be used specifically for facilitating payments on the social network.
And Facebook is also looking into ways to utilize the digital currency using blockchain technology. This rumor coincides with Facebook’s decision to have David Marcus head up a blockchain division at Facebook.
Malicious Ads Purchased By Russians Released By Congress
According to USA Today, Democrats on the House Intelligence Committee released thousands of Russian-bought Facebook ads last week. The ads were used to inflame tensions among Americans during and after the 2016 U.S. presidential election. They were bought by the Internet Research Agency, an organization allegedly linked to the Kremlin. Facebook responded to this malicious content by restricting political ads and requiring the organizations purchasing them to be disclosed.
A large portion of the ads were set up by Russians pretending to be Americans. And many of those ads had simply exploited divisive issues like immigration, race, gay rights and gun control to drive animosity between groups of people especially in states like Michigan, Pennsylvania, Virginia and Wisconsin.
Some of the ads were ineffective, while others were seen over a million times. The ads ran over a roughly two-year period starting around June 2015 and increased in volume as the election drew closer.
Once Facebook turned the ads over to Congress, dozens of them were made public. And House Intelligence Committee leaders at the time said that all of the ads will be made public to increase awareness of the manipulation pushed by the Russian organization.
In a blog post, Facebook said it has started to deploy new tools and teams to identify threats proactively in the run-up to specific elections. Currently, Facebook is tracking over 40 elections. Going forward, Facebook has to tread carefully about how data is being handled considering it is still recovering from the Cambridge Analytica scandal in which personal details of 87 million users were exploited.
New Facebook Live Tools
Facebook pointed out that daily average broadcasts from verified publisher Pages increased 1.5x over the past year. And this past week, Facebook product manager Matt Labunka said new features are being rolled out to make it easier for publishers to go live.
Live API Update:
Facebook has made the setup process easier for users that frequently utilize the Live API. “Publishers and creators who frequently use the Live API have requested a more simplified stream setup process, and we've rolled out the ability to use a persistent stream key with an encoder when going live on Facebook,” wrote Labunka. “This means if you're a publisher or creator that goes live regularly, you now only need to send one stream key to production teams, and because a Page's stream key is permanent, it can be sent in advance of a shoot — making it easier to collaborate across teams and locations for live productions. Broadcasters can also save time by using the same stream key every time they start a new Live video.”
An example of where this has saved some time is how gaming creator Darkness429 goes live at 3PM every week day. Having a persistent stream key made this process easier for him.
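For readers curious what reusing a persistent stream key with an encoder looks like in practice, here is a minimal, hypothetical sketch in Python. The ingest URL format, the placeholder stream key and the choice of ffmpeg as the encoder are illustrative assumptions, not values taken from Facebook's Live API documentation.

    # Hypothetical sketch: push the same Page stream key to the encoder for
    # every broadcast instead of generating a new key each time. The URL
    # format and key below are placeholders -- consult the Live API docs.
    import subprocess

    PERSISTENT_STREAM_KEY = "EXAMPLE-PERSISTENT-KEY"   # shared once with the production team
    INGEST_URL = f"rtmps://live-api-s.facebook.com:443/rtmp/{PERSISTENT_STREAM_KEY}"

    def go_live(source: str) -> None:
        """Encode a local source and push it to the Page's ingest point."""
        subprocess.run([
            "ffmpeg",
            "-re",              # read the input at its native frame rate
            "-i", source,       # local file or capture device
            "-c:v", "libx264",  # H.264 video
            "-c:a", "aac",      # AAC audio
            "-f", "flv",        # RTMP ingest expects an FLV container
            INGEST_URL,
        ], check=True)

    if __name__ == "__main__":
        go_live("weekly_show.mp4")   # e.g. the regular 3PM weekday broadcast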
Crossposting:
Facebook also launched Live Crossposting. This feature allows Pages to seamlessly publish a single broadcast across multiple Pages at the same time. And it will be displayed as an original post by each Page. Doing this would enable the Live stream to reach a broader audience.
Live Rewind:
Facebook is currently testing the ability for viewers to rewind Live videos while they are streaming from Pages. Facebook said CrossFit Games described this feature as “massive” for its viewers. “They have different points of discovery, want to go back, or miss a key play... It’s huge,” said CrossFit Games via Facebook. Once testing is complete, this feature should be available to all of Facebook’s users.
Instagram
Stories Soundtrack Test
Instagram is reportedly testing a feature that would allow users to add music to Stories based on code that was found within its Android app, according to TechCrunch. The “music stickers” would essentially allow users to search for music and add song clips to posts. This is made possible through Facebook’s partnership with music labels. Plus Instagram is testing the ability to automatically detect a song that you are listening to in the background and automatically create a sticker with the artist and song information.
Jane Manchun Wong was briefly able to test out the feature:
DM Improvements For Businesses
Instagram now makes it easier for businesses to manage direct messages through its platform. Businesses will now see important customer messages in the main Direct inbox rather than the pending folder. And businesses will be able to star and filter conversations to follow up on. Plus Instagram is currently testing quick replies so businesses can easily respond to common questions.
Emoji Sliding Scale
Instagram has launched a new emoji polling slider feature for Stories that allows your friends to rate content on a scale rather than the standard yes/no buttons. The emoji animates as you drag it back and forth on the scale.
"To add an emoji slider sticker to your story, select it from the sticker tray after taking a photo or video. Place it anywhere you’d like and write out your question," said Instagram in a blog post. "Then, set the emoji that best matches your question’s mood. You can pick from a few of the most popular emoji, or choose almost any emoji from your library if you have something specific in mind."
Here is a video of how it works:
Klout
Shut Down
Lithium has announced that it is going to be shutting down Klout, the website that scored the influential power of social media users. This became known as the Klout Score. It was reported that Lithium had acquired Klout for $200 million back in March 2014. And Klout confirmed the shut down on Twitter:
Slack
8 Million DAUs And 3 Million Paid Users
TechCrunch reported this past week that workplace collaboration company Slack has hit 8 million daily active users (DAUs) and 3 million paid users. This is up from September when Slack was reportedly hitting 6 million DAUs, 2 million paid users and $200 million in annual recurring revenue. Over half of Slack’s users are outside the U.S.
Snap
Tim Stone Named CFO
Snap's chief financial officer Drew Vollero is being succeeded by Tim Stone. Stone is a former VP of finance at Amazon who has a background in digital content. Vollero is going to pursue other opportunities and will remain as a paid “non-employee advisor” until August 15th to help with the transition.
“I am deeply grateful for Drew and his many contributions to the growth of Snap,” said CEO Evan Spiegel in a statement. “He has done an amazing job as Snap's first CFO, building a strong team and helping to guide us through our transition to becoming a public company. The discipline that he has brought to our business will serve us well into the future. We wish Drew continued success and all the best.”
According to CNBC, Stone’s salary will be $500,000, and he will receive restricted stock units valued at $20 million plus 500,000 stock options subject to time-based vesting.
Redesign Rollback Begins
Snap is starting to roll back its redesign on the Snapchat app. The redesigned Snapchat app was not very popular as over 1.2 million people signed a petition to go back to the original design. 
The latest design makes Snaps and Chats show up in chronological order again. And the Stories have been moved back to the right-hand side of the app again. One thing that will be retained from the redesign is that Stories from your friends will be separated from brands. And there is a separate Subscriptions feed which can be searched. 
The updated design will be coming to iOS first. But it is unknown when the rollback will happen on Android. 
Twitter
Encrypted Messaging Feature
Twitter is believed to be testing an encrypted messaging feature that would compete against services like WhatsApp, Telegram and Signal. Here is a tweet that Jane Manchun Wong wrote about the rumored service:
WhatsApp
Facebook And Instagram Videos Are Now Playable Within The App
WhatsApp now has the ability to play Facebook and Instagram videos within the app. So when your contacts send you these types of videos, you can watch them without having to leave WhatsApp. WhatsApp already offers the ability to watch YouTube videos within the app without switching over to the YouTube app.
This feature is already available in the iOS version of WhatsApp, but it is not available on the Android version yet. The updated iOS version of WhatsApp gives admins the ability to provide/revoke certain rights for other users in the group such as the ability to rename a group.
YouTube
$5 Million For Creators For Change
YouTube is investing about $5 million for the “Creators for Change” program, which will be provided to 47 creators. Of the 47 creators, 31 are new members. The creators will be sharing positive videos about global issues such as hate speech and xenophobia.
“As part of our $5M investment in this program, these creators will receive support from YouTube through a combination of project funding, mentorship opportunities, and ongoing production assistance at our YouTube Spaces,” said YouTube in a blog post. “They’ll also join us for our second annual Social Impact Camp at YouTube Space London this summer. The Social Impact Camp is an exclusive two-day-long camp featuring inspirational speakers, video production workshops, and mentorship opportunities with experts as well as time for the Ambassadors to connect with one another.”
“Take A Break” Notifications
If you find yourself spending a lot of time on YouTube, you can now set up reminders to take a break. This is part of Google’s broader Digital Wellbeing initiative. YouTube’s “take a break” notifications show a prompt once you have spent a chosen amount of time on the service.
You can access this feature by tapping on your profile photo at the top right of the mobile app > Settings > General > “Remind me to take a break.” From there, you can select the choices: Never, every 15 minutes, every 30 minutes, every 60 minutes, every 90 minutes or every 180 minutes.

Tech #BigData

Ignite Your GPU Database Strategy By Addressing GDPR

Nima Negahban, Kinetica
With just over a month until the European Union’s (EU) General Data Protection Regulation (GDPR) goes into effect, Facebook is moving its data controller entity from Facebook Ireland to Facebook USA, keeping more than 1.5 billion users out of the reach of the European privacy law. Mark Zuckerberg, who promised to apply the “spirit” of the legislation globally, is moving users located in Africa, Asia, Australia, and Latin America to sites governed by US law rather than European law.
Kinetica

The GDPR brings new responsibilities to organizations that store and process personal data.
Clearly, May 25—the day GDPR goes into effect—is a pivotal day that will have a global ripple effect well beyond Europe. It will impact how we manage and use data across the world in the Extreme Data Economy.
It does not, however, need to be viewed as a regulatory “tax” to avoid. As companies embrace business differentiating innovations, such as GPU databases, they can simultaneously meet the key requirements of GDPR.
Need a primer on a GPU database? Read a quick overview here.
The GDPR “was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy, and to reshape the way organizations across the region approach data privacy.” GDPR covers the entire EU and explicitly states that companies that fail to comply with the regulation are subject to a penalty up to 20 million euro, or 4% of global revenue, whichever is greater.
A major misconception is that the regulation applies to EU companies only; in actuality, the regulation applies to any company holding data from EU citizens.
With regards to an enterprise data strategy, there are a number of key considerations that must be addressed, including data profiling, the right to be forgotten, automated personal data processing, data pseudonymization, and data breaches. Each of these areas demands healthy consideration, balancing privacy concerns against innovation.
The GDPR exists because enterprises have not been thoughtful enough around data privacy, forcing governments (like the EU) to mandate change. Many of their offenses are much less dramatic than the salacious stories around companies like Facebook and Cambridge Analytica.
The GDPR forces us to think creatively about how to reconstitute the business to comply with regulation. Savvy enterprises will figure out how to meet these requirements by combining these efforts with new data innovation investments.
For instance, NVIDIA (NVDA) GPUs are redefining how companies translate data into insight, leveraging the massive parallel processing power of  GPUs rather than CPUs. This has created a new category of GPU infrastructure, including a GPU database, to revolutionize data practices.
From a business perspective, GPU database technology accomplishes several things. A GPU database dramatically accelerates analysis of billions of rows of data, with an in-memory GPU architecture that speeds parallel processing. It can deliver results in milliseconds. It provides near-linear scalability without the need to index. It can take geospatial and streaming data and turn it into visualizations that reveal interesting patterns and business opportunities, capitalizing on the GPU’s particular aptitudes, including rendering the visuals themselves. GPU databases have seamless machine learning capabilities, enabling organizations to easily leverage Google’s popular Tensorflow and other AI frameworks via User Defined Functions that analyze the complete set of data. In short, the GPU foundation is a massive opportunity to build a data-powered architecture that not only allows businesses to do more with data, but also helps align with GDPR regulations.
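As a concrete, deliberately simplified illustration of the kind of brute-force scan described above, the sketch below uses RAPIDS cuDF, a GPU dataframe library whose API mirrors pandas. The file name, column names and threshold are hypothetical, and this is not Kinetica's API.

    # Minimal sketch, not a production breach detector: scan an access log
    # on the GPU for accounts pulling unusually many personal records, the
    # sort of signal GDPR's 72-hour breach-notification clock depends on.
    import cudf  # RAPIDS GPU dataframes; `import pandas as cudf` runs the same logic on CPU

    events = cudf.read_parquet("access_log.parquet")   # hypothetical log with user_id, records_accessed

    per_user = events.groupby("user_id")["records_accessed"].sum()
    suspects = per_user[per_user > 1_000_000]          # arbitrary illustrative threshold

    print(suspects.sort_values(ascending=False).head(20))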
A GPU database can also help a business comply with GDPR regulation:

  1. Breach Notification. A key requirement of GDPR is for a business to notify relevant authorities of data breaches within 72 hours of becoming aware of an attack. GPU databases arm businesses with the ability to do brute force analysis of billions of rows of data in real-time. The power of the GPU database is the ability to not only look at batch data, but also real-time streaming data. It provides organizations with blazing-fast analytics, the ability to conduct more complex analysis than traditional BI tools, and a “bigger brain” to run machine learning algorithms across constantly changing data sources. In short, the GPU database provides a more powerful means to assess risk of breach, making it easier to identify breaches and remediate within shorter periods of time.
  2. Bias & Profiling. The GDPR prohibits using personal data that “reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.” Data scientists analyzing data involving personal data can no longer work on homegrown data science platforms built “off the grid.” Given that a GPU database architecture enables data scientists to access a centralized engine where data is managed, businesses can eliminate data science sprawl and implement a centralized data architecture and workflow for governance.
  3. Data Lineage & Auditability. Under GDPR, data scientists must be able to identify where data is generated and provide an audit trail of where it resides. With a GPU database architecture, data can be assigned a unique identifier and an audit trail can be produced identifying the in-memory GPU where data is pinned. This enables businesses to track the data lifecycle and maintain a comprehensive audit trail of where it was used.
  4. 360-Degree View of Business. In order to meet GDPR obligations, you need to know, at all times, what sensitive data you are collecting and all the places it is stored. A GPU-database allows companies to visualize, analyze, and generate insight around batch data, streaming data, IoT data, location-based data, and many other unpredictable sources. The ability to visualize the business in motion is critical to understanding how data is used across all divisions. This 360 degree view is critical to properly understanding an organization’s holistic data strategy and to identify anomalies. It also enables a business to more easily watch and track incoming personal data to address key GDPR requirements such as the right to be forgotten. Given the complexity of GDPR, it is critical that businesses paint a picture of where data is used and resides so they have the agility to address GDPR issues as they arise.
  5. Reduce attack surface with GPUs. A single NVIDIA Deep Learning System has 81,920 CUDA cores. The equivalent number of cores on a CPU would require 1,280 servers (81,920/64). The wider your attack surface for managing data, the more complex and challenging it is to meet GDPR requirements. Using GPUs to drive data consolidation simplifies the data architecture and makes it easier to be GDPR compliant.
The GDPR brings new responsibilities to organizations that store and process personal data. The journey to compliance should not be viewed as an effort to avoid penalties. It is an opportunity to reconstitute an organization's data strategy so that they can profit from the Extreme Data Economy.
Style & Design #CelebrityMoney

What You Need To Know About The American Idol Live! 2018 Tour


Forbes Finds, Contributor
American Idol Eric McCandless/ABC via Getty Images



The Top 7 finalists perform two songs this week, battling it out for America's vote to make it into the Top 5 on May 6.
Get your rowdy cheers ready, because American Idol is likely coming to a city near you. Following a long-awaited return to television in 2018 after a two-year hiatus (and a network change from Fox to ABC), the popular show is taking a summer road trip with the American Idol Live! 2018 tour. The 40+ city tour kicks off on Wednesday, July 11 in Redding, CA and wraps up on Sunday, September 16 in Washington DC.
The tour gives fans the chance to experience the talented vocals of this season’s 7 finalists Cade Foehner, Caleb Lee Hutchinson, Catie Turner, Gabby Barrett, Jurnee, Maddie Poppe and Michael J. Woodard in an electrifying live setting. The shows will also showcase 2018’s newly-crowned winner, and will be hosted by special guest Kris Allen, who many die-hard fans recognize as Season 8’s American Idol winner. To add to the excitement, In Real Life, winner of ABC's 2017 summer reality competition show Boy Band, will join in the fun on select dates. In Real Life currently has three hot singles on the airwaves: "Eyes Closed," "Tattoo (How 'Bout You)" and their first Spanish track "How Badly.”
American Idol has had an unbelievable run since first premiering on Fox in 2002. Its first 15 seasons on television attracted more than 40 million live viewers at one point, who tuned in every week to watch the show transform everyday Americans - albeit with incredible vocal gifts - from obscurity to stardom. Winners Carrie Underwood and Kelly Clarkson, and finalists Jennifer Hudson and Chris Daughtry are just a few of the show’s contestants who skyrocketed to fame to become household names after appearing on the show. However, the last time American Idol went on a live tour was in 2015, highlighting winner Nick Fradiani. This year’s tour will be managed by Jared Paul, a seasoned entertainment manager whose clients include New Kids on the Block. Paul has produced several touring productions of former television shows like “Glee,” “Dancing with the Stars” and “America’s Got Talent,” and will bring his management experience to the production of this year’s American Idol Live! 2018 tour.

Tickets went on sale Friday. Check out StubHub for tickets, but hurry, the dates will sell out fast.


Twitter is working on End-to-End Encrypted Secret DM!

_________________________________________________________________________________

    Facebook Just Tapped the Next Mark Zuckerberg

    Mark Zuckerberg reorganized leadership of Facebook's product groups.
    Yichuan Cao/SIPA/AP
    Earlier this week, Facebook announced a grand reshuffle of its leadership. First reported by Recode, the new structure includes teams for the company’s apps, new platforms and infrastructure, and central product services. Most of the people moving into new roles have been with the company for close to a decade or longer, and many have proven themselves adept at the skill Facebook appears to value over all others: growth.

    Chris Cox Is the New Mark Zuckerberg

    If there were ever a question as to who would step in to fill Zuckerberg’s shoes should something happen to him, it has been resolved. With his new role as head of the company’s family of apps—Instagram, WhatsApp, Messenger and the tried and true Big Blue (aka Facebook)—Facebook’s chief product officer is stepping out as the leader he has long been internally. Anyone paying close attention knows this already.
    Cox, who is very close friends with Zuckerberg, dropped out of a Stanford graduate program to join Facebook in 2005. He’s done a lot of jobs since. When I first met him in 2008, he was the 25-year-old head of human resources who zipped around the office on a RipStik. An engineer by training, he helped invent News Feed and was the star of the video Facebook showed investors in the run-up to its initial public offering. Cox is a brilliant public face for the company because he pairs engineering rigor and Facebook history with an emotive voice that Zuckerberg sometimes lacks. (Check out his F8 keynote from 2011 to see this in action.)
    Cox’s new role also suggests Facebook will integrate Instagram and WhatsApp more deeply into the company, now that they’re organizationally closer to Messenger and Facebook. This may have contributed to WhatsApp cofounder Jan Koum’s departure last month.

    Javier Olivan Just Got a Lot More Important

    Of Zuckerberg’s three direct reports on the product side of the business, Olivan is the only one not currently on Facebook’s leadership page. The Spanish native arrived at Facebook after finishing his Stanford MBA in 2007 to run international growth, when the company had just 40 million users, most in the United States. The growth team is Facebook’s Navy SEALs, a special-operations force brought in when the company sees potential for a feature to take off and the stakes are high. Historically, most teams at Facebook have included one of Olivan’s direct reports.
    Olivan’s responsibilities now include ad products, analytics, and a group called “integrity, growth, and product management.” One could also read this as a sign that Zuckerberg is demoting ad products: Mark Rabkin, who is in charge of ads, now reports to Olivan.

    Controversy Won't Stop WhatsApp’s New Boss

    Unlike its Instagram acquisition, Facebook’s acquisition of WhatsApp was never a great culture fit. Koum promised to keep WhatsApp ad-free, then sold it to an ad company. In March, cofounder Brian Acton, who’d already left Facebook to start a foundation, advised his 35,000 Twitter followers to #Deletefacebook. Then last month, Koum announced he was leaving his post as WhatsApp’s CEO and stepping off Facebook’s board.
    Now Chris Daniels, a seven-year Facebook veteran, steps in to replace Koum, eschewing the title as CEO of WhatsApp for a vice president title. It will be on Daniels, who will report to Cox, to sort out a business for the messaging service. He’s got the experience to take on the challenge. Until recently, Daniels ran Facebook’s internet.org initiatives around expanding access in developing countries. The largest of these projects is Free Basics, an app that offers access to free web services. Although telecom companies rejected the idea of partnering with Facebook to provide the Free Basics app early on—and India banned it in 2016—more than 80 carriers partner with Facebook to offer the service.

    Facebook Probably Has a Blockchain Plan

    David Marcus, who ran Facebook’s Messenger app, will now lead a team of fewer than a dozen people dedicated to blockchain technology. Kevin Weil, who was in charge of product at Instagram, is joining him along with James Everingham, who was in charge of engineering there. (WIRED’s Erin Griffith and Sandra Upson have some thoughts on what this means.)
    A board member of cryptocurrency wallet Coinbase with a lot of payments experience, Marcus has a history of leaving large posts to take up seemingly small projects. He was CEO of PayPal in 2014 when he left to run Messenger. The move was a head-scratcher: At the time, Messenger was a tiny messaging app that had failed to take off as an email replacement. But Zuckerberg had a plan to transform Messenger into a better version of WhatsApp (which, it should be said, he’d just bought), one that businesses could harness to reach users in new ways.

    Messenger’s Chief is a Growth Expert

    Replacing Marcus at Messenger is Stan Chudnovsky. He’s one of the newer members of the leadership bench, having arrived at Facebook in 2014. Like Marcus, Chudnovsky is a serial entrepreneur; he sold his last company, the software startup IronPearl, to PayPal before it was a year old. Chudnovsky’s nascent startup had been building growth tools for companies, and at PayPal he was head of growth. At Messenger, Chudnovsky worked closely with Marcus to grow Messenger into a service with more than 1.3 billion monthly active users.

    There Are Almost No Women Here

    A shamefully obvious aspect of the image of the org chart that Recode pieced together earlier this week is the paucity of female faces. In fact, there’s only one: Naomi Gleit, who now runs “integrity, growth and product management.” Gleit, who is Facebook’s longest tenured employee (she was #29), has long been part of the growth team at Facebook and was until recently the company’s “vice president of social good.” Gleit is a force to be reckoned with, no doubt. But this new structure raises questions about Zuckerberg’s commitment to building an inclusive workforce. For as much as the company has benefitted from the work its chief operating officer, Sheryl Sandberg, has done personally to promote women, it should be unacceptable for Zuckerberg to fill 13 of the company’s 14 most critical technical positions with men.

     

    Facebook is making its biggest executive shuffle in company history


WhatsApp, Messenger and Facebook’s core app are getting new leaders as part of a massive executive reorg. 


____________________________________

Facebook Replaces Lobbying Executive Amid Regulatory Scrutiny

Facebook has been scrambling to respond to intense scrutiny from federal regulators and lawmakers.CreditJason Henry for The New York Times

WASHINGTON — Facebook on Tuesday replaced its head of policy in the United States, Erin Egan, as the social network scrambles to respond to intense scrutiny from federal regulators and lawmakers.
Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for lobbying and government relations as head of policy for the last two years. She will be replaced by Kevin Martin on an interim basis, the company said. Mr. Martin has been Facebook’s vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission.
Ms. Egan will remain chief privacy officer and focus on privacy policies across the globe, Andy Stone, a Facebook spokesman, said.
The executive reshuffling in Facebook’s Washington offices followed a period of tumult for the company, which has put it increasingly in the spotlight on Capitol Hill. Last month, The New York Times and others reported that the data of millions of Facebook users had been harvested by the British political research firm Cambridge Analytica. The ensuing outcry led Facebook’s chief executive, Mark Zuckerberg, to testify at two congressional hearings this month.

Since the revelations about Cambridge Analytica, the Federal Trade Commission has started an investigation of whether Facebook violated promises it made in 2011 to protect the privacy of users, making it harder for the company to share data with third parties.
At the same time, Facebook is grappling with increased privacy regulations outside the United States. Sweeping new privacy laws called the General Data Protection Regulation are set to take effect in Europe next month. And Facebook has been called to talk to regulators in several countries, including Ireland, Germany and Indonesia, about its handling of user data.
Mr. Zuckerberg told Congress this month that Facebook had grown too fast and that he hadn’t foreseen the problems the platform would confront.
“Facebook is an idealistic and optimistic company,” he said. “For most of our existence, we focused on all the good that connecting people can bring.”
The executive shifts put two Republican men in charge of Facebook’s Washington offices. Mr. Martin will report to Joel Kaplan, vice president of global public policy. Mr. Martin and Mr. Kaplan worked together in the George W. Bush White House and on Mr. Bush’s 2000 presidential campaign.

Facebook hired Ms. Egan in 2011; she is a frequent headliner at tech policy events in Washington. Before joining Facebook, she spent 15 years as a partner at the law firm Covington & Burling as co-chairwoman of the global privacy and security group.
Facebook is undergoing other executive changes. Last month, The Times reported that Alex Stamos, Facebook’s chief information security officer, planned to leave the company after disagreements over how to handle misinformation on the site.

 _____________________________________

Google Knows Even More About Your Private Life Than Facebook

________________________________________

 

Facebook releases long-secret rules on how it polices the service

MENLO PARK, Calif. (Reuters) - Facebook Inc (FB.O) on Tuesday released a rule book for the types of posts it allows on its social network, giving far more detail than ever before on what is permitted on subjects ranging from drug use and sex work to bullying, hate speech and inciting violence.
Facebook for years has had “community standards” for what people can post. But only a relatively brief and general version was publicly available, while it had a far more detailed internal document to decide when individual posts or accounts should be removed.
Now, the company is providing the longer document on its website to clear up confusion and be more open about its operations, said Monika Bickert, Facebook’s vice president of product policy and counter-terrorism.
“You should, when you come to Facebook, understand where we draw these lines and what’s OK and what’s not OK,” Bickert told reporters in a briefing at Facebook’s headquarters.
Facebook has faced fierce criticism from governments and rights groups in many countries for failing to do enough to stem hate speech and prevent the service from being used to promote terrorism, stir sectarian violence and broadcast acts including murder and suicide.
At the same time, the company has also been accused of doing the bidding of repressive regimes by aggressively removing content that crosses governments and providing too little information on why certain posts and accounts are removed.
New policies will, for the first time, allow people to appeal a decision to take down an individual piece of content. Previously, only the removal of accounts, Groups and Pages could be appealed.
Facebook is also beginning to provide the specific reason why content is being taken down for a wider variety of situations.
Facebook, the world’s largest social network, has become a dominant source of information in many countries around the world. It uses both automated software and an army of moderators that now numbers 7,500 to take down text, pictures and videos that violate its rules. Under pressure from several governments, it has been beefing up its moderator ranks since last year.
Bickert told Reuters in an interview that the standards are constantly evolving, based in part on feedback from more than 100 outside organizations and experts in areas such as counter-terrorism and child exploitation.
“Everybody should expect that these will be updated frequently,” she said.
The company considers changes to its content policy every two weeks at a meeting called the “Content Standards Forum,” led by Bickert. A small group of reporters was allowed to observe the meeting last week on the condition that they could describe the process, but not the substance.
At the April 17 meeting, about 25 employees sat around a conference table while others joined by video from New York, Dublin, Mexico City, Washington and elsewhere.
Attendees included people who specialize in public policy, legal matters, product development, communication and other areas. They heard reports from smaller working groups, relayed feedback they had gotten from civil rights groups and other outsiders and suggested ways that a policy or product could go wrong in the future. There was little mention of what competitors such as Alphabet Inc’s Google (GOOGL.O) do in similar situations.
Bickert, a former U.S. federal prosecutor, posed questions, provided background and kept the discussion moving. The meeting lasted about an hour.
Facebook is planning a series of public forums in May and June in different countries to get more feedback on its rules, said Mary deBree, Facebook’s head of content policy.


FROM CURSING TO MURDER

The longer version of the community standards document, some 8,000 words long, covers a wide array of words and images that Facebook sometimes censors, with detailed discussion of each category.
Videos of people wounded by cannibalism are not permitted, for instance, but such imagery is allowed with a warning screen if it is “in a medical setting.”
Facebook has long made clear that it does not allow people to buy and sell prescription drugs, marijuana or firearms on the social network, but the newly published document details what other speech on those subjects is permitted.
Content in which someone “admits to personal use of non-medical drugs” should not be posted on Facebook, the rule book says.
The document elaborates on harassment and bullying, barring for example “cursing at a minor.” It also prohibits content that comes from a hacked source, “except in limited cases of newsworthiness.”

The new community standards do not incorporate separate procedures under which governments can demand the removal of content that violates local law.
In those cases, Bickert said, formal written requests are required and are reviewed by Facebook’s legal team and outside attorneys. Content deemed permissible under community standards but in violation of local law - such as a prohibition in Thailand on disparaging the royal family - is then blocked in that country, but not globally.
The community standards also do not address false information - Facebook does not prohibit it but it does try to reduce its distribution - or other contentious issues such as use of personal data. 

________________________________________

Facebook may face billions in fines over its facial recognition tagging feature

A federal judge ruled in favor of a class action lawsuit certification


Facebook could face billions of dollars in fines after a federal judge ruled that the company must face a class action lawsuit. The lawsuit alleges that Facebook’s facial recognition features violate Illinois law by storing biometric data without user consent.

The lawsuit involves Facebook’s Tag Suggestions tool, which identifies users in uploaded photos and suggests tagging them automatically. The feature launched on June 7th, 2011. The complainants allege that Facebook “collects and stores their biometric data without prior notice or consent in violation of their privacy rights.” Illinois’ Biometric Information Privacy Act (BIPA) requires explicit consent before companies can collect biometric data like fingerprints or facial recognition profiles.

It should be noted that Facebook has since added a more direct notification alerting users to its facial recognition features, but this lawsuit is based on the earlier collection of user data. With the order, millions of the social network’s users could collectively sue the company, with violations of BIPA incurring a fine of between $1,000 and $5,000 each time someone’s image is used without permission.
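A rough worked example shows why "billions" is not hyperbole. The class size below is purely hypothetical (the order covers Illinois users for whom Facebook created a face template after June 7, 2011); the per-violation amounts are BIPA's statutory damages for negligent and for intentional or reckless violations.

    # Back-of-the-envelope exposure estimate; the class size is an assumption.
    ILLINOIS_CLASS_SIZE = 6_000_000     # hypothetical number of class members
    FINE_NEGLIGENT      = 1_000         # per negligent violation under BIPA
    FINE_RECKLESS       = 5_000         # per intentional or reckless violation

    low  = ILLINOIS_CLASS_SIZE * FINE_NEGLIGENT   # $6 billion
    high = ILLINOIS_CLASS_SIZE * FINE_RECKLESS    # $30 billion
    print(f"Potential exposure: ${low:,} to ${high:,}")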

In the court order, Judge James Donato wrote:

“A class action is clearly superior to individual proceedings here. While not trivial, BIPA’s statutory damages are not enough to incentivize individual plaintiffs given the high costs of pursuing discovery on Facebook’s software and code base and Facebook’s willingness to litigate the case...Facebook seems to believe that a class action is not superior because statutory damages could amount to billions of dollars.”


The Tag Suggestions feature works in four steps. First, software tries to detect the faces in uploaded photos. Second, for each detected face Facebook computes a “face signature” — a series of numbers that “represents a particular image of a face” based on your photo. Third, the system searches a database of “face templates” for a signature that matches. Fourth, if a match is found, Facebook suggests the tag. Facebook doesn’t store face signatures and keeps only face templates.
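The matching step can be pictured with a short, purely illustrative sketch. This is not Facebook's code: the "signature" here is just a normalized vector, the "templates" a dictionary of stored vectors, and the similarity threshold an arbitrary assumption.

    # Illustrative only: model a face signature as an embedding vector and
    # face templates as stored vectors; matching is a similarity search.
    from typing import Dict, Optional
    import numpy as np

    def face_signature(face_pixels: np.ndarray) -> np.ndarray:
        """Stand-in for the model that turns a detected face into numbers."""
        vec = face_pixels.astype("float32").ravel()[:128]   # toy 128-d signature
        return vec / (np.linalg.norm(vec) + 1e-9)           # normalize for cosine similarity

    def suggest_tag(signature: np.ndarray,
                    templates: Dict[str, np.ndarray],
                    threshold: float = 0.8) -> Optional[str]:
        """Return the best-matching user ID if it clears the threshold, else None."""
        best_user, best_score = None, threshold
        for user_id, template in templates.items():
            score = float(np.dot(signature, template))       # cosine similarity of unit vectors
            if score > best_score:
                best_user, best_score = user_id, score
        return best_user

    # Templates persist per user; signatures would be discarded after matching.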

Facebook says its automatic tagging feature detects 90 percent of faces in photos. The lawsuit claims about 76 percent of faces in the photos have face signatures computed. Tag Suggestions is available only in limited markets; it is primarily offered to users in the US, who have the option to turn the feature off.

A lawyer for Facebook users, Shawn Williams, told Bloomberg:

“As more people become aware of the scope of Facebook’s data collection and as consequences begin to attach to that data collection, whether economic or regulatory, Facebook will have to take a long look at its privacy practices and make changes consistent with user expectations and regulatory requirements,” he said.

Facebook also launched a new feature back in December that notifies users when someone uploads a photo of them, even if they’re not tagged. In a statement to The Verge, Facebook said, “We are reviewing the ruling. We continue to believe the case has no merit and will defend ourselves vigorously.” Facebook also says it has always been upfront about how the tag function works, and users can easily turn it off if they wish.

  ________________________________________

Facebook points finger at Google and Twitter for data collection

“Other companies suck in your data too,” Facebook explained in many, many words today with a blog post detailing how it gathers information about you from around the web.
Facebook product management director David Baser wrote, “Twitter, Pinterest and LinkedIn all have similar Like and Share buttons to help people share things on their services. Google has a popular analytics service. And Amazon, Google and Twitter all offer login features. These companies — and many others — also offer advertising services. In fact, most websites and apps send the same information to multiple companies each time you visit them.” Describing how Facebook receives cookies, IP address, and browser info about users from other sites, he noted, “when you see a YouTube video on a site that’s not YouTube, it tells your browser to request the video from YouTube. YouTube then sends it to you.”
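The mechanics Baser describes are easy to picture: when a page embeds a widget, the visitor's browser makes a request to the widget's host, and that single request already carries the identifying details. The sketch below simulates such a request with Python's requests library; the widget URL, cookie value and page address are illustrative assumptions rather than documented endpoints.

    # Illustrative only: roughly what a browser sends when a third-party page
    # embeds a social widget. The cookie, user agent and URLs are made up.
    import requests

    resp = requests.get(
        "https://www.facebook.com/plugins/like.php",               # example widget URL
        params={"href": "https://example-news-site.com/article"},  # page being "liked"
        headers={
            "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",       # browser info
            "Referer": "https://example-news-site.com/article",    # page the visitor is on
            "Cookie": "c_user=123456789",                          # hypothetical session cookie
        },
        timeout=10,
    )
    # The visitor's IP address travels implicitly with the connection itself.
    print(resp.status_code)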
It seems Facebook is tired of being singled out. The tacked-on “them too!” statements at the end of its descriptions of opaque data collection practices might have been meant to normalize the behavior, but they come off feeling a bit petty.

The blog post also fails to answer one of the biggest lines of questioning from CEO Mark Zuckerberg’s testimonies before Congress last week. Zuckerberg was asked by Representative Ben Lujan about whether Facebook constructs “shadow profiles” of ad targeting data about non-users.
Today’s blog post merely notes that “When you visit a site or app that uses our services, we receive information even if you’re logged out or don’t have a Facebook account. This is because other apps and sites don’t know who is using Facebook. Many companies offer these types of services and, like Facebook, they also get information from the apps and sites that use them.”
Facebook has a lot more questions to answer about this practice, since most of its privacy and data controls are only accessible to users who’ve signed up.

Judge says class action suit against Facebook over facial recognition can go forward

Whenever a company may be guilty of something, from petty neglect to grand deception, there’s usually a class action lawsuit filed. But until a judge rules that lawsuit legitimate, the threat remains fairly empty. Unfortunately for Facebook, one major suit from 2015 has just been given that critical go-ahead.
The case concerns an Illinois law that prohibits collection of biometric information, including facial recognition data, in the way that Facebook has done for years as part of its photo-tagging systems.
BIPA, the Illinois law, is a real thorn in Facebook’s side. The company has not only been pushing to have the case dismissed, but it has been working to have the whole law changed by supporting an amendment that would defang it — but more on that another time.
(Update: Although Facebook’s own Manager of State Policy Daniel Sachs co-chairs a deregulatory tech council in the Illinois Chamber of Commerce that proposed the amendment, the company maintains that “We have not taken any position on the proposed legislation in Illinois, nor have we suggested language or spoken to any legislators about it.” You may decide for yourself the merit of that claim.)
Judge James Donato in California’s Northern District has made no determination as to the merits of the case itself; first, it must be shown that there is a class of affected people with a complaint that is supported by the facts.
For now, he has found (you can read the order here) that “plaintiffs’ claims are sufficiently cohesive to allow for a fair and efficient resolution on a class basis.” The class itself will consist of “Facebook users located in Illinois for whom Facebook created and stored a face template after June 7, 2011.”

The data privacy double-standard

That said, other tech companies have gotten off light. Whether it’s because Apple and Google aren’t CEO’d by their founders any more, or we’ve grown to see iOS and Android as such underlying platforms that they aren’t responsible for what third-party developers do, scrutiny has focused on Zuckerberg and Facebook.
The Cambridge Analytica scandal emerged from Facebook being unable to enforce its policies that prohibit developers from sharing or selling data they pull from Facebook users. Yet it’s unclear whether Apple and Google do a better job at this policing. And while Facebook let users give their friends’ names and interests to Dr. Aleksandr Kogan, who sold it to Cambridge Analytica, iOS and Android apps routinely ask you to give them your friends’ phone numbers, and we don’t see mass backlash about that.
At least not yet.

 

How Facebook’s Past Data Policy Has Come Back to Haunt It


A video explaining how policies dating back to 2007 led to the misuse of data by Cambridge Analytica.

Mark Zuckerberg’s Mission: Stay Cool in a Very Hot Seat

Facebook CEO Mark Zuckerberg faces lawmakers this week in what are likely to be contentious hearings about privacy that will be a broader test of how effectively he can guide his social-media giant.

Mark Zuckerberg’s Washington Mission: Stay Cool in a Very Hot Seat
The Facebook chief will be tested as he appears before Congress about privacy issues
By Betsy Morris and
Deepa Seetharaman
April 8, 2018 7:34 p.m. ET

A year ago, Mark Zuckerberg was preparing to deliver the commencement speech at Harvard University. As well as a personal milestone, it was the kind of carefully choreographed, profoundly upbeat event at which he excels.

Facebook To Alert Users Affected By Cambridge Analytica Data Breach

If you were one of the 87 million Facebook users that might have been affected by the social media platform’s recent Cambridge Analytica privacy breach, then you will be getting a detailed message in your news feed starting Monday.

In the wake of the scandal, Facebook said that users who may have had their data shared with Cambridge Analytica will be getting messages this week, according to the Associated Press. In addition, in an effort to do some damage control, all Facebook users will be receiving a notice with a link to see what apps they use and what information they have shared with those apps. They will be given the option to shut off these apps or completely turn off access to third-party apps.

The ongoing Cambridge Analytica scandal has been a thorn in Mark Zuckerberg and Facebook's side. In March, it was reported that Cambridge Analytica, the data firm backed by Donald Trump supporter Robert Mercer and once steered by former Trump advisor Steve Bannon, obtained personal information from 50 million Facebook users without permission. That data was then used to target voters and influence the 2016 election. Of those affected, Facebook said more than 70 million of the 87 million users are in the U.S., with over a million each in the Philippines, Indonesia, and the U.K.
Zuckerberg has since acknowledged that this was a "huge mistake." He is set to testify before a joint session of the Senate Judiciary and Commerce Committees on April 10, then appear the next day before the House Energy and Commerce Committee, answering growing questions about data privacy and how Facebook plans to address the problem.
 ___________________________


Facebook suspends another data analytics firm after CNBC discovers it was using tactics like Cambridge Analytica

  • Data analytics firm CubeYou used personality quizzes clearly labeled for "non-profit academic research" to help marketers find customers.
  • One of its quizzes, "You Are What You Like," which also goes by "Apply Magic Sauce," states it is only for "non-profit academic research that has no connection whatsoever to any commercial or profit-making purpose or entity."
  • When CNBC showed Facebook the quizzes and terms, which are similar to the methods used by Cambridge Analytica, Facebook said it was going to suspend CubeYou from the platform to investigate.

Mark Zuckerberg, chief executive officer and founder of Facebook, in July 2017 (Getty Images)

Facebook is suspending a data analytics firm called CubeYou from the platform after CNBC notified the company that CubeYou was collecting information about users through quizzes.
CubeYou misleadingly labeled its quizzes "for non-profit academic research," then shared user information with marketers. The scenario is eerily similar to how Cambridge Analytica received unauthorized access to data from as many as 87 million Facebook user accounts to target political marketing.
The company sold data that had been collected by researchers working with the Psychometrics Lab at Cambridge University, similar to how Cambridge Analytica used information it obtained from other professors at the school for political marketing.
The CubeYou discovery suggests that collecting data from quizzes and using it for marketing purposes was far from an isolated incident. Moreover, the fact that CubeYou was able to mislabel the purpose of the quizzes — and that Facebook did nothing to stop it until CNBC pointed out the problem — suggests the platform has little control over this activity.
Facebook, however, disputed the implication that it can't exercise proper oversight over these types of apps, telling CNBC that it can't control information that companies mislabel. Upon being notified of CubeYou's alleged violations, Facebook said it would suspend all CubeYou's apps until a further audit could be completed.
"These are serious claims and we have suspended CubeYou from Facebook while we investigate them," Ime Archibong, Facebook vice president of product partnerships, said in a statement.
"If they refuse or fail our audit, their apps will be banned from Facebook. In addition, we will work with the UK ICO [Information Commissioner's Office] to ask the University of Cambridge about the development of apps in general by its Psychometrics Centre given this case and the misuse by Kogan," he said. Aleksander Kogan was the researcher who built the quiz used by Cambridge Analytica.
"We want to thank CNBC for bringing this case to our attention," Archibong added.
The revelation comes as Facebook CEO Mark Zuckerberg prepares to answer questions before Congress this week stemming from the Cambridge Analytica scandal. The Senate Commerce and Judiciary committees and the House Energy and Commerce Committee are expected to quiz him on what the site is doing to enhance user privacy, and prevent foreign actors from using Facebook to meddle in future elections.
Since the Cambridge Analytica scandal erupted, Facebook CEO Mark Zuckerberg has claimed personal responsibility for the data privacy leaks, and the company has launched several initiatives to increase user control over their data.

Meet CubeYou

CubeYou boasts on its web site that it uses census data and various web and social apps on Facebook and Twitter to collect personal information. CubeYou then contracts with advertising agencies that want to target certain types of Facebook users for ad campaigns.
CubeYou's site says it has access to personally identifiable information (PII) such as first names, last names, emails, phone numbers, IP addresses, mobile IDs and browser fingerprints.
On a cached version of its web site from March 19, it also said it keeps age, gender, location, work and education, and family and relationship information. It also has likes, follows, shares, posts, likes to posts, comments to posts, check-ins and mentions of brands/celebrities in a post. Interactions with companies are tracked back to 2012 and are updated weekly, the site said.
"This PII information of our panelists is used to verify eligibility (we do not knowingly accept panelists under the age of 18 in our panel), then match and/or fuse other online and offline data sources to enhance their profiles," CubeYou wrote.
The company's web site currently claims it has more than 10 million opted-in panelists, but the cached March 19 version said it had "an unbiased panel of more than 45 million people globally."

CubeYou collected a lot of this data through online apps that are meant to be entertaining or fun.
An ad agency exec who met with the company confirmed CubeYou said it mostly collects information through quizzes.
According to its web site, one of CubeYou's "most viral apps" is a Facebook quiz created in conjunction with the University of Cambridge called "You Are What You Like." It is meant "to predict a user's personality based on the pages s/he liked on Facebook."
Two versions of this app were still active on Facebook as of Sunday morning. The most recent version has been renamed "Apply Magic Sauce" (YouAreWhatYouLike.com redirects to ApplyMagicSauce.com). Another version still called "You Are What You Like" is also available.

When a user clicks on the "App Terms" link for the Apply Magic Sauce app, it links to a page saying that the information collected through the quiz is intended for "non-exclusive access for research purposes only" and only for "non-profit academic research that has no connection whatsoever to any commercial or profit-making purpose or entity."


After CNBC contacted Facebook for this story, Facebook said there were two previous versions of the app named "You Are What You Like," one created in 2013, which was deleted by the developer, and one submitted later in 2013.
Both of those prior versions had similar disclaimers on Facebook about being used for academic research purposes.
In addition, those prior versions were able to get access to information from friends of the people who took the quiz -- as also happened in the Cambridge Analytica case. Until 2015, Facebook allowed developers to access information on Facebook friends as long as the original app user opted-in, a loophole that expanded the database of personal information considerably.
If the original user still remained opted in, CubeYou could theoretically still access their data to this day.

CubeYou and Cambridge U's response

When reached for comment, CubeYou CEO Federico Treu said the company was involved with developing the app and website, but only worked with Cambridge University from December 2013 to May 2015.
It only collected data from that time and has not had access since June 2015 to data from new people who have taken the quiz, Treu said.
He also pointed out that the YouAreWhatYouLike.com website has different -- and looser -- terms of usage than the Facebook terms that CNBC discovered.
The web site says, "the information you submit to You Are What You Like may be stored and used for academic and business purposes, and also disclosed to third parties, including for example (but not limited to) research institutions. Any disclosure will be strictly in an anonymous format, such that the information can never be used to identify you or any other individual user." (Italics added by CNBC.)
He also denied CubeYou has access to friends' data if a user opted in, and said it only connects friends who have opted into the app individually.
Cambridge University said CubeYou's involvement was limited to developing a website.
"We were not aware of Cubeyou's claims on their blog," the University of Cambridge Psychometrics Center said in a statement.
"Having had a look now, several of these appear to be misleading and we will contact them to request that they clarify them. For example, we have not collaborated with them to build a psychological prediction model -- we keep our prediction model secret and it was already built before we started working with them," the institution said.
"Our relationship was not commercial in nature and no fees or client projects were exchanged. They just designed the interface for a website that used our models to give users insight on their [the users'] data. Unfortunately collaborators with the University of Cambridge sometimes exaggerate their connection to Cambridge in order to gain prestige from its academics' work," it added.


'A great place for us to get smart about the consumer'


CubeYou certainly claimed it was able to use this data to target Facebook users, and advertisers seem to have bought the pitch.
CubeYou's web site says its customers include global communications firm Edelman, and sports and entertainment agency Octagon. It also works with advertising agencies including 72 and Sunny (which counts Google, Adidas and Coors Light as clients), the Martin Agency (Discover, Geico, Experian), and Legacy Marketing (L'Oreal, Hilton, TGI Fridays), among others.

The site does not say which CubeYou data was used on which projects, but all of the agencies' testimonials describe how CubeYou's data has allowed them to better understand potential customers.
"CubeYou is a great place for us to get smart about the consumer," one customer testimonial from Legacy Marketing says. "We primarily use Mintel for our research, but there's very little consumer segmentation and I think that the greatest benefit of a tool like CubeYou is you can get highly nuanced data about demographics, psychographics and interests so easily."

 _______________________

Facebook Data on 87 Million Users May Have Been Improperly Shared

Mark Zuckerberg says he made a ‘huge mistake’ in not focusing on protecting privacy of user data



Facebook Inc. Chief Executive Officer Mark Zuckerberg said Wednesday that he made a “huge mistake” in not focusing more on potential abuse of users’ personal information, as the social-media giant he founded revealed that data breaches were far more extensive than previously known. 


Facebook to Check Groups Behind ‘Issue Ads’

Move aims to prevent the spread of misinformation


Facebook Inc. will soon require that advertisers wanting to run ads on hot-button political issues go through an authorization process first, a move the social network hopes will prevent the spread of misinformation across its platform.

U.S., States Step Up Pressure on Facebook


The attorneys general of 37 states and territories escalate a backlash that has shaken the social-media giant



Government officials ratcheted up pressure Monday on Facebook Inc. over its handling of user data, with federal regulators saying they are investigating the social-media giant’s privacy policies and 37 state attorneys general demanding explanations for its practices.
The Federal Trade Commission, in a statement, signaled that its probe of Facebook is broad. Tom Pahl, a top FTC official, said the commission “takes very seriously” recent reports raising “substantial concerns about the privacy practices of Facebook.”

 

Facebook and Google Face Emboldened Antagonists: Big Advertisers

Latest uproar over voter profiling data follows company demands for more control, more transparency from tech giants



Add to the list of people frustrated with Facebook Inc. and Google a quiet but hugely influential group—the people who pay the bills.
In the past year and a half, the two firms have had one run-in after another with advertisers. Procter & Gamble Co. was among many companies that boycotted Google’s YouTube when they discovered ads were running before extremist and racist videos. 

Facebook is losing control with big advertisers. Advertisers no longer consider Facebook a secure data platform that keeps terrorist groups off its network, and they are pulling out of their Facebook advertising contracts and accounts.

 ___________________________________

 

Facebook is about to tell users if their data was shared with Cambridge Analytica


Facebook on Monday will begin alerting the 87 million users whose data may have been harvested by Cambridge Analytica.


The company plans to post a link at the top of users' news feeds that will allow them to see which apps are connected to their Facebook accounts and what information those apps are permitted to see.


"As part of this process we will also tell people if their information may have been improperly shared with Cambridge Analytica," the company said last week.
Facebook users will also have the opportunity to use the link to delete apps and prevent them from collecting more information.
Fierce backlash has confronted the company since news broke last month that Cambridge Analytica, a London-based voter analytics group, was able to obtain information about tens of millions of users.
The controversy has renewed questions about whether the world's largest social media platform does enough to protect the sensitive information it collects from users on its platform.
The data Cambridge Analytica obtained was originally collected by University of Cambridge psychology professor Aleksandr Kogan.
He used an app, called "thisisyourdigitallife," which offered a personality test. Facebook users who downloaded the app also gave it permission to collect data on their location, their friends and content they had "liked." The data collection was all completely allowable under Facebook's rules at the time.
Facebook has said that Kogan violated its terms of service by passing the information on to Cambridge Analytica, a firm that was later hired to work on President Donald Trump's campaign in 2016.
Facebook banned Kogan and Cambridge Analytica from its platform last month, just before The New York Times published an investigative piece detailing how the data traded hands.
As the controversy swelled, members have used the "download a copy of your Facebook data" feature to get a glimpse of exactly what information the social network has about its users.
Many were rattled to find years worth of private texts traded on the platform's Messenger feature, code for recognizing faces in photographs, and contact information that people thought was tucked away on their cell phones.
Also ahead this week: CEO Mark Zuckerberg will face a grilling from Congress on Tuesday to discuss the data controversy.
—CNN's Charles Riley and Sara Ashley O'Brien contributed to this report.
_____________________________

 

Trey Gowdy wants answers from Mark Zuckerberg

Facebook CEO Mark Zuckerberg has been issued a court order to testify before Congress. House Oversight and Government Reform Committee Chairman Trey Gowdy wants answers from Zuckerberg and wants to get to the bottom of Facebook's policies. Advertisers are pulling out of their own contract accounts.

No more advertisers on Facebook... Now Facebook is losing control of the narrative and of its own platform.

"The committee is aware of numerous reports about the need Answers From Facebook at the allegations of excess cost," Targeting Conservatives Blocking The NRA Tea Party Groups wrote in a letter to Mark Zuckerberg sent Thursday. Gowdy wants a briefing no later than two weeks from Friday to go over the details of Facebook policies

Mark Zuckerberg's name is ringing across Capitol Hill again. Politicians are demanding that the Facebook co-founder and CEO testify to Congress in the wake of the social network's scandal involving a data firm affiliated with the Donald Trump campaign.
Facebook disclosed late Friday that researchers from UK-based Cambridge Analytica had duped the social networking giant and gained access to data from more than 50 million Facebook users through an app called "thisisyourdigitallife," which was then used for political ads during the 2016 presidential election. 
 Facebook said in a statement Friday that it had banned the group, but the political pressure on the massive social network is just beginning. By Monday morning, multiple senators were demanding that Zuckerberg testify before Congress.
An appearance from Zuckerberg could potentially offer answers at a time when Facebook has gotten into hot water over its involvement with the distribution of Russia-made ads and posts on its network. But it's unclear whether it'll happen. 
 While the government has summoned Facebook multiple times, the CEO has never testified on these issues. In the past, Facebook has sent its general counsel Colin Stretch; Monika Bickert, its head of global policy management; and other executives not named Mark Zuckerberg.

'They need to take responsibility'

But the call for Facebook's CEO continues to rise. On Saturday, Sen. Amy Klobuchar tweeted that "Mark Zuckerberg needs to testify before the Senate Judiciary."
The Minnesota Democrat added to her demand on Monday morning, telling NPR's Morning Edition that Zuckerberg needs to speak for Facebook's flaws.
"They have not come before us, they've given it to their lobbyists and their lawyers, and we think that they need to take responsibility for what's going on," Klobuchar said. "I don't know why this CEO, even though he's super famous and has made a lot of money, why he also doesn't have to come before the committee."  
She pointed out that multiple CEOs have testified to Congress in the past, and said the chances of Zuckerberg appearing increase if more politicians call for it. 
Responding to a request for comment, Facebook didn't address whether Zuckerberg would be willing to testify before Congress. 
"We are in the process of conducting a comprehensive internal and external review as we work to determine the accuracy of the claims that the Facebook data in question still exists," said Paul Grewal, Facebook vice president and deputy general counsel. "That is where our focus lies as we remain committed to vigorously enforcing our policies to protect people's information." 
Klobuchar isn't the only one speaking out. The Federal Election Commission on Monday also called for Zuckerberg, as well as Larry Page, CEO of Google parent Alphabet, and Twitter CEO Jack Dorsey to testify at a public hearing set for June 27.
"Your perspective would be of great value to the Commission and to the nation," Ellen Weintraub, the FEC's vice chair, said in her letter to Zuckerberg.
In a joint letter with Klobuchar, Sen. John Kennedy, a Republican from Louisiana, has also called for Zuckerberg to testify before Congress, and asked Senate Judiciary Committee Chairman Sen. Chuck Grassley, a Republican from Iowa, to call for a hearing.
"While this Committee's Subcommittee on Crime and Terrorism convened a hearing with witnesses representing Facebook, Twitter, and Google in October of 2017, we have yet to hear from the leaders of these companies directly," Kennedy and Klobuchar wrote.
The letter also asks that the CEOs from Google and Twitter testify. 
In response to the letter from Klobuchar and Kennedy, a spokeswoman for Grassley said the senator's taking the request under consideration: "At this point, no decision has been made on whether to hold such a hearing or whether it would occur at the full committee or subcommittee level."
Sen. Mark Warner, a Democrat from Virginia, made a similar request on Thursday, before news of the scandal came out. The vice chairman of the Senate Intelligence Committee told Bloomberg "the CEOs owe an obligation."  

Weaponizing psychological profiles

On Tuesday, Warner formally addressed Zuckerberg, writing that Facebook owes the public an explanation.
"It's time for Mr. Zuckerberg and the other CEOs to testify before Congress. The American people deserve answers about social media manipulation in the 2016 election," Warner said in a tweet.
Cambridge Analytica released a statement Monday morning calling the claims against its company "false allegations."
On Monday, Sen. Ron Wyden (D-Ore.), wrote a letter to Zuckerberg, asking for the CEO to explain how Facebook's data was abused by Cambridge Analytica.
"With little oversight -- and no meaningful intervention from Facebook -- Cambridge Analytica was able to use Facebook-developed and marketed tools to weaponize detailed psychological profiles against tens of millions of Americans," Wyden wrote in his letter.
Several senators have added their requests for Zuckerberg to head to Washington, DC, including Sen. Richard Blumenthal, a Democrat from Connecticut. 
"Mark Zuckerberg needs to testify under oath in public before the Judiciary Committee. He owes it to the American people who ought to be deeply disappointed by the conflicting and disparate explanations that have been offered," he told reporters on Monday. Blumenthal added that Zuckerberg should be subpoenaed to appear if he won't come on his own. 
Sens. John Thune (R-SD), Roger Wicker (R-Miss.) and Jerry Moran (R-Kan.) signed a joint letter on Monday as well, demanding a response from Zuckerberg by March 29. 
Rep. Adam Schiff, a Democrat from California on the House Intelligence Committee, called for Cambridge Analytica, as well as Facebook and Zuckerberg, to testify to Congress. 
"I think it would be beneficial to have him come testify before the appropriate oversight committees," he told The Washington Post.
The pressure isn't just coming from DC. The European Union has also launched an investigation into Cambridge Analytica and Facebook, according to a statement from Antonio Tajani, the European Parliament president.  
In the UK, Damian Collins, the chair of Parliament's Digital, Culture, Media and Sport Committee, on Tuesday sent a letter to Zuckerberg to request that he make an appearance to provide "oral evidence" about Facebook's handling of user data.
"It is now time to hear from a senior Facebook executive with sufficient authority to give an accurate account of this catastrophic failure of process," Collins wrote. "Given your commitment at the start of the New Year to 'fixing' Facebook, I hope that this representative will be you."
First published March 19 at 9:28 a.m. PT. 
Update, 10:35 a.m. PT: Adds a letter from Sen. Ron Wyden.
Update, 11:40 a.m. PT: Adds comment from Facebook.
Update, 12:22 p.m.: Adds a response from Cambridge Analytica.
Update, 12:40 p.m. PT: Adds a comment from a spokeswoman for Grassley.
Update, March 20 at 6:38 a.m. PT: Adds new statements from Sen. Mark Warner and UK member of Parliament Damian Collins.
Update, March 20 at 7:37 a.m. PT: Adds statements from four senators.


How the Cambridge Analytica story became a crisis

The longer you consider Facebook’s Cambridge Analytica scandal, the stranger it seems. The basic details of the story, in which a researcher improperly gave away data to the company that became Donald Trump’s data operations team in 2016, have been known for two years. The effectiveness of Cambridge Analytica’s psychographic targeting, which attempted to influence voters by mapping out their Facebook Likes, is highly suspect and likely overstated. The eye-popping number of Facebook profiles said to be involved — 50 million — may turn out to be marketing hype for a company that excels at it.

And yet, revelations from this weekend’s stories in The New York Times and The Guardian continue to batter the company. A bipartisan group of US senators called upon CEO Mark Zuckerberg to testify about how Cambridge Analytica came into possession of so much user data. British authorities promised to investigate the incident as well. On Monday, the company’s stock fell more than 10 percent from the all-time high it set on February 1st. On Tuesday morning, Bloomberg reported that the Federal Trade Commission is investigating the company over its use of personal data.

Cambridge Analytica’s data misuse may ultimately have had little effect in influencing elections here or abroad. But the way Cambridge Analytica obtained its data, and reports that the company held on to the data, despite telling Facebook it had deleted it, have renewed concerns about data privacy on the world’s biggest social network. After learning that data from a researcher’s personality quiz app had improperly been shared with Cambridge Analytica, Facebook took the company at its word that it had purged user profiles: “That to me was the most astonishing thing,” former employee Christopher Wylie told The Guardian. “They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”

Facebook’s lack of enforcement in the face of bad actors, coupled with misuse of its platform on a grand scale, have drawn outrage around the globe. And while Cambridge Analytica is among the most prominent examples to date of how Facebook can be misused, it belongs to a long and growing list. In March alone:


Taken together, these incidents paint a picture of a platform on which crises are developing faster than its minders can address them. A year and a half after Donald Trump’s election sparked a cultural reckoning over social media, Facebook has struggled to contain the fallout. A series of steps taken to remove terrorist propaganda more quickly, and tamp down on the spread of fake news, have produced some encouraging results. But those steps have done little to stop the daily drumbeat of articles about ways in which Facebook is misused around the world, often with disturbing results.

Facebook has typically been quick to apologize when confronted with misuse of the platform, promising it will do better in the future. But the company has taken a defensive posture over the Cambridge Analytica stories, saying that the issue was resolved years ago. But while the company plays defense, a growing number of lawmakers and regulators around the world are promising to investigate the company. This scandal really is different.

The company said Monday that it had hired a forensics team to investigate Cambridge Analytica, with the firm's permission. But before Facebook could complete its audit, the United Kingdom Information Commissioner's Office ordered that it stop while the office pursues a warrant to mount its own investigation.

It was a dramatic real-world standoff in a case that has until now played out mostly online. And yet the standoff also had an undeniable symbolism: Facebook, attempting to fix its mistakes by itself, found itself at last restrained by the government. As Tuesday began, neither Zuckerberg nor his chief operating officer, Sheryl Sandberg, had made a statement about the Cambridge Analytica revelations. In the brutal months since the election, Facebook has typically been quick to apologize. But after an overwhelming March, it appears that its top executives are speechless.


The Key to Understanding Facebook's Current Crisis
Facebook's current data crisis involving Cambridge Analytica has angered users and prompted government investigations. To understand what's happening now, you have to look back at Facebook's old policies from 2007 to 2014. WSJ's Shelby Holliday explains. Illustration: Laura Kammerman


Facebook is scrambling to placate users, advertisers and investors following a string of damaging news reports about the misuse of user data.

Last week, Facebook confirmed that Cambridge Analytica, a data firm hired by President Trump’s campaign, had violated the company’s policies when it purchased the data of 50 million users from a researcher who accessed it in 2013. The stock plunged, lawmakers began demanding answers and users threatened to quit the social network altogether.
Cambridge Analytica says it’s launching its own investigation to see if the firm engaged in wrongdoing, and in a Facebook post, CEO Mark Zuckerberg acknowledged that Facebook knew about the policy violation in 2015. Facebook asked the data firm and the researcher to certify that the information had been deleted, but it didn’t notify users at the time.
Now, Facebook is facing a wave of backlash for not doing more to prevent information from being abused. Although the trove of information used by Cambridge Analytica was downloaded before 2015, the year Facebook implemented stricter data policies, it has exposed an ugly truth for the social network: user information that was accessed during the company’s earlier years can still be abused today.
In the video above, we take a look at how Facebook’s lax policies of the past regarding the sharing of data paved the way for the company’s current crisis.

 

After Days of Silence, Facebook’s Mark Zuckerberg Admits to ‘Mistakes’ With User Data


CEO pledges to investigate outsiders’ handling of user information




Facebook Inc. Chief Executive Mark Zuckerberg broke his silence five days into a growing uproar about how outsiders handle Facebook’s user data, admitting mistakes and pledging an investigation but failing to calm some who thought he should have gone further in his remarks.
The growing controversy has shaken the social-media company, knocking its stock price lower and prompting renewed calls for governments to better regulate technology businesses that hold enormous quantities of information about their users.


Facebook CEO Mark Zuckerberg admits mistakes, pledges fixes after data scandal

 


Breaking five days of silence, Facebook CEO Mark Zuckerberg admitted mistakes and outlined steps to protect user data in light of a privacy scandal involving a Trump-connected data-mining firm.
Zuckerberg said Wednesday that Facebook has a "responsibility" to protect its users' data and if it fails, "we don't deserve to serve you."
But Zuckerberg stopped short of apologizing.
And he wrote "what happened" instead of "what we did," leaving Facebook one step removed from responsibility.
Zuckerberg and Facebook's No. 2 executive, Sheryl Sandberg, have been quiet since news broke Friday that Cambridge Analytica may have used data improperly obtained from roughly 50 million Facebook users to try to sway elections.
Facebook shares have dropped some 8 percent since the revelations  were first published, raising questions about whether social media sites are violating users' privacy.
Even before the scandal broke, Facebook had already taken the most important steps to prevent a recurrence, Zuckerberg said. For example, in 2014, it reduced the access outside apps had to user data. However, some of the measures didn't take effect until a year later, allowing Cambridge Analytica to access the data in the intervening months.
Zuckerberg acknowledged that there is more to do.
In a Facebook post on Wednesday, Zuckerberg said Facebook will ban developers who don't agree to an audit. An app's developer will no longer have access to data from people who haven't used that app in three months. Data will also be generally limited to user names, profile photos and email, unless the developer signs a contract with Facebook and gets user approval.
In a separate post, Facebook said it will inform people whose data was misused by apps. And in the future, when it bans an app for misusing people's data, Facebook promises to tell everyone who used it.
Facebook first learned of this breach of privacy more than two years ago, but hadn't mentioned it publicly until Friday.
The company said it is also "building a way" for people to know if their data was accessed by "This Is Your Digital Life," though there is no way to do this at the moment. The app is the psychological profiling quiz that researcher Aleksandr Kogan created and paid about 270,000 people to take. Cambridge Analytica later obtained data from the app for about 50 million Facebook users, because it also vacuumed up data on people's friends.
Facebook didn't say how it would inform users if their data was compromised. But it could look similar to the page it set up for users to see if they liked or followed accounts set up by the Russian troll farm Internet Research Agency, accused of meddling with the 2016 presidential elections. This tool, however, doesn't show users if they merely saw —or even "liked"— posts from those pages.
Earlier Wednesday, Kogan described himself as a scapegoat and said he had no idea his work would be used in Donald Trump's 2016 presidential campaign.
Kogan, a psychology researcher at Cambridge University, told the BBC that both Facebook and Cambridge Analytica have tried to place the blame on him for violating the social media platform's terms of service, even though Cambridge Analytica ensured him that everything he did was legal.
"Honestly, we thought we were acting perfectly appropriately," Kogan said. "We thought we were doing something that was really normal."
Cambridge has shifted the blame to Kogan, whom the firm described as a contractor.
Kogan said Cambridge Analytica approached him to gather Facebook data and provided the legal advice that this was "appropriate."
"One of the great mistakes I did here was I just didn't ask enough questions," he said. "I had never done a commercial project; I didn't really have any reason to doubt their sincerity. That's certainly something I strongly regret now."
He said the firm paid some $800,000 for the work, but it went to participants in the survey.
"My motivation was to get a data set I could do research on; I have never profited from this in any way personally," he said.
Authorities in Britain and the United States are investigating.
Sandy Parakilas, who worked in data protection for Facebook in 2011 and 2012, told a U.K. parliamentary committee Wednesday that the company was vigilant about its network security but lax when it came to protecting users' data.
He said personal data including email addresses and in some cases private messages was allowed to leave Facebook servers with no real controls on how the data was used after that.
"The real challenge here is that Facebook was allowing developers to access the data of people who hadn't explicitly authorized that," he said, adding that the company had "lost sight" of what developers did with the data.
Meanwhile, the top prosecutors in Massachusetts and New York have sent a letter to Facebook demanding the social media giant protect its users' private information.
Massachusetts Attorney General Maura Healey and New York Attorney General Eric Schneiderman launched a joint investigation Saturday after reports that British data analysis firm Cambridge Analytica captured information from 50 million Facebook users without their consent.
Healey said residents in her state "deserve answers immediately," from Facebook and Cambridge Analytica about what data was shared and how it was allowed to happen. Her office said it has been in touch with Facebook about the investigation.
Schneiderman said that if the company violated New York law "we will hold them accountable."
___
Danica Kirka and Gregory Katz reported from London.


DRUDGE REPORT 2018® reports that Facebook, Twitter, and YouTube are all blocking conservative news feeds across social media sites.

A list of 10 violations of law that YouTube, Twitter, and Facebook have committed over the last 10 years:

1) YouTube, Twitter, and Facebook are in violation of freedom of the press.

2) YouTube, Twitter, and Facebook are in violation of freedom of religion.

3) YouTube, Twitter, and Facebook are not blocking ISIS terrorist groups.

4) YouTube, Twitter, and Facebook are facing harassment charges.

5) YouTube, Twitter, and Facebook are in violation for not blocking scammers.

6) YouTube, Twitter, and Facebook are in violation of major constitutional rights.

7) YouTube, Twitter, and Facebook are in violation for targeting Tea Party Patriots and conservative Tea Party groups.

8) YouTube, Twitter, and Facebook are in violation of cyber-bullying rules.

9) YouTube, Twitter, and Facebook are in violation of free speech; messages have been blocked on YouTube, Twitter, and Facebook.

10) YouTube, Twitter, and Facebook are in violation of legal and law-enforcement laws; people post death-threat messages on Facebook without being blocked.

 

Facebook Workers: We Routinely Suppressed Conservative News

Here is how you can circumvent Facebook’s block on Jihad Watch

Most Americans today get their news through Facebook, and so the Leftist authoritarian Mark Zuckerberg’s words here are ominous: “For example, take the Wall Street Journal or New York Times. Even if you don’t read them or don’t agree with everything they write, most people have confidence that they’re high quality journalism. On the flip side, there are blogs that have intense followings but are not widely trusted beyond their core audience. We will show those publications somewhat less.”
That means us, friends, however unjustified that lack of trust may be, and others whom the hard-Left censors at Facebook deem unworthy of your attention. Foes of jihad terror are on their block list, but here is a way you can adjust your settings so that you still get the news we report here:

“Facebook’s Changing Your Newsfeed. Here’s How To Make Sure You Still See Posts By Your Favorite Sites.,” by James Barrett, Daily Wire, January 26, 2018 (thanks to the Geller Report):

Facebook recently announced that it will be making major changes to its newsfeed that will significantly impact what users see. The emphasis, CEO Mark Zuckerberg explained, will be on posts from users’ friends and family, as well as what Facebook calls “trusted sources.”
Those “trusted sources,” however, are not necessarily going to be the same pages and news sites that users follow; rather, they are sources that Facebook designates as “trusted” through what it says will be rankings produced by “a diverse and representative” sample of Facebook users (see full post below). Which sources are “trusted sources” and which are not, is unclear. Sources not deemed “trusted” — even those you choose to follow — will get buried or de-emphasized in your newsfeed.
But there’s a way to make sure that Facebook does not prevent you from seeing posts by your favorite sites. Below are the instructions for how to update your Facebook settings so that your newsfeed prioritizes posts by sites you follow, like The Daily Wire, rather than letting the platform determine what you get to see.
1. On your Facebook homepage, click the drop-down arrow on the top right of the page and select “News Feed Preferences” (usually found near the bottom of the listed options).

2. Select “Prioritize who to see first” (usually the first option listed).
3. Change the view options to show “Pages only,” so it’s easier to find the pages for the sites you prefer to see in your newsfeed. Then simply select the pages you wish to see first in your newsfeed.

Another way to protect your newsfeed: Go to the Facebook page of the site you want to follow, click the “Following” drop-down arrow, and check the “See First” option “In Your News Feed.”

After you’ve protected your newsfeed to make sure you’re still seeing posts from your favorite sources, the other extremely important thing you can do to make sure those sources don’t get buried by Facebook is share posts with friends and family.
Here is an excerpt of the message posted by Zuckerberg explaining the platform’s new emphasis on promoting “trusted” news sources in order to protect against “sensationalism, misinformation and polarization” (full post below):

There’s too much sensationalism, misinformation and polarization in the world today. Social media enables people to spread information faster than ever before, and if we don’t specifically tackle these problems, then we end up amplifying them. That’s why it’s important that News Feed promotes high quality news that helps build a sense of common ground.
The hard question we’ve struggled with is how to decide what news sources are broadly trusted in a world with so much division. We could try to make that decision ourselves, but that’s not something we’re comfortable with. We considered asking outside experts, which would take the decision out of our hands but would likely not solve the objectivity problem. Or we could ask you — the community — and have your feedback determine the ranking.
We decided that having the community determine which sources are broadly trusted would be most objective.
Here’s how this will work. As part of our ongoing quality surveys, we will now ask people whether they’re familiar with a news source and, if so, whether they trust that source. The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don’t follow them directly. (We eliminate from the sample those who aren’t familiar with a source, so the output is a ratio of those who trust the source to those who are familiar with it.)
This update will not change the amount of news you see on Facebook. It will only shift the balance of news you see towards sources that are determined to be trusted by the community.
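As a rough illustration of the arithmetic Zuckerberg describes (drop respondents who are unfamiliar with a source, then score the rest), here is a toy sketch in Python. The data shape and function are assumptions made for illustration, not Facebook's implementation.

```python
# Toy sketch of the familiarity-filtered trust ratio described above.
# Not Facebook's implementation; the survey data shape is an assumption.
def trust_score(responses):
    """responses: list of (is_familiar, trusts) booleans from the quality survey."""
    familiar = [trusts for is_familiar, trusts in responses if is_familiar]
    if not familiar:
        return None  # no familiar respondents, so no ratio can be computed
    return sum(familiar) / len(familiar)

# Example: 6 of 10 respondents know the outlet, and 4 of those 6 trust it.
sample = [(True, True)] * 4 + [(True, False)] * 2 + [(False, False)] * 4
print(round(trust_score(sample), 2))  # 0.67
```

The key design point in the quoted description is the denominator: unfamiliar respondents are excluded entirely, so a niche outlet is not penalized simply for being unknown.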

 

Facebook Does Not Believe In U.S. Legal Process Requirements

Facebook does not disclose account records solely in accordance with its terms of service and applicable law, including the federal Stored Communications Act ("SCA"), 18 U.S.C. Sections 2701-2712. Under U.S. law:

  • A valid subpoena issued in connection with an official criminal investigation is required to compel the disclosure of basic subscriber records (defined in 18 U.S.C. Section 2703(c)(2)), which may include: name, length of service, credit card information, email address(es), and a recent login/logout IP address(es), if available.
  • A court order issued under 18 U.S.C. Section 2703(d) is required to compel the disclosure of certain records or other information pertaining to the account, not including contents of communications, which may include message headers and IP addresses, in addition to the basic subscriber records identified above.
  • A search warrant issued under the procedures described in the Federal Rules of Criminal Procedure or equivalent state warrant procedures upon a showing of probable cause is required to compel the disclosure of the stored contents of any account, which may include messages, photos, videos, timeline posts, and location information.
  • We interpret the national security letter provision as applied to Facebook to require the production of only 2 categories of information: name and length of service.

     Facebook Message
     The email address provided is not a government issued or law enforcement email address. Please try again with a valid email address.
    Email dawns@lapd.gov
    Enter your email address to receive a unique link to the Law Enforcement Online Request System. The link will give you access to the system for one hour.

     _________________________________

    German court rules Facebook data use, privacy settings illegal

A regional court in Germany has found Facebook's default privacy settings and use of personal data it collects from users to be in violation of consumer protection laws. The Berlin court found that Facebook did not provide users enough information for them to understand how their data is being collected and that any agreements users signed did not constitute meaningful consent. VZBV, the German privacy advocacy group that filed the suit, argued that data collection agreements that Facebook users are automatically opted into don't give users enough notice about what they're agreeing to.

"Facebook hides default settings that are not privacy-friendly in its privacy center and does not provide sufficient information about it when users register," said Heiko Dünkel, a litigation policy officer at the VZBV. "This does not meet the requirement for informed consent." The court ruled that several Facebook default data sharing settings did not count as consent from the user. It also found clauses in Facebook's terms of service to be invalid, including its policy of requiring users to use their "authentic names" on the website.

Facebook told The Guardian that it intended to appeal the decision. "We are working hard to ensure that our guidelines are clear and easy to understand and that the services offered by Facebook are in full accordance with the law," the company said in a statement.

The social media company is also dealing with scrutiny from the national government in Germany and the European Union over its data collection and privacy policies. Facebook had previously said that it will be making significant changes to its privacy settings to conform with the EU's new General Data Protection Regulation, laws covering data use across the EU. "We're rolling out a new privacy center, globally, that will put the core privacy settings for Facebook in one place and make it much easier for people to manage their data," Facebook COO Sheryl Sandberg said of the changes in January.
Where can I find my settings?
To find your settings, click account settings in the top right corner of your screen and select Settings. From here, you can select the option in the left sidebar that contains the settings you want to adjust:
General: Edit the basics like your name, email or password
Security and Login: Turn on alerts and approvals to keep your account secure
Privacy: Adjust who can see your stuff and who can look you up
Timeline and Tagging: Set who can see your timeline and how to manage photo tagging
Blocking: Manage who and what you block
Language: Select the language that you want to use for Facebook

  Pamela Geller
Here's the problem you fail to identify. @nytimes is hyper-partisan. @snopes is hyper-partisan. These are not objective, fair sources. Your piece is one long whine about conservatives having free, fair access to the public square (@facebook) under the guise of fake news&clickbait https://twitter.com/WIRED/status/963021236212260864 

TRY THIS! Facebook’s Changing Your Newsfeed. Here’s How To Make Sure You Still See Posts By Your Favorite Sites


Geller Report readers (https://gellerreport.com) have long suffered the suppression and oppression of the social media speech police. The Geller Report feed has been blocked from your newsfeed. There is a work-around. Try this: go to my Facebook page, click the "Following" drop-down arrow, and check the "See First" option under "In Your News Feed." The full step-by-step instructions are in the Daily Wire piece by James Barrett excerpted above.

Illustration: Jim Cooke
Facebook workers routinely suppressed news stories of interest to conservative readers from the social network's influential "trending" news section, according to a former journalist who worked on the project. This individual says that workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly influential section, even though they were organically trending among the site's users.

Several former Facebook "news curators," as they were known internally, also told Gizmodo that they were instructed to artificially "inject" selected stories into the trending news module, even if they weren't popular enough to warrant inclusion—or in some cases weren't trending at all. The former curators, all of whom worked as contractors, also said they were directed not to include news about Facebook itself in the trending module. In other words, Facebook's news section operates like a traditional newsroom, reflecting the biases of its workers and the institutional imperatives of the corporation. Imposing human editorial values onto the lists of topics an algorithm spits out is by no means a bad thing—but it is in stark contrast to the company's claims that the trending module simply lists "topics that have recently become popular on Facebook."
These new allegations emerged after Gizmodo last week revealed details about the inner workings of Facebook's trending news team—a small group of young journalists, primarily educated at Ivy League or private East Coast universities, who curate the "trending" module on the upper-right-hand corner of the site. As we reported last week, curators have access to a ranked list of trending topics surfaced by Facebook's algorithm, which prioritizes the stories that should be shown to Facebook users in the trending section. The curators write headlines and summaries of each topic, and include links to news sites. The section, which launched in 2014, constitutes some of the most powerful real estate on the internet and helps dictate what news Facebook's users—167 million in the US alone—are reading at any given moment.

"I believe it had a chilling effect on conservative news."

"Depending on who was on shift, things would be blacklisted or trending," said the former curator. This individual asked to remain anonymous, citing fear of retribution from the company. The former curator is politically conservative, one of a very small handful of curators with such views on the trending team. "I'd come on shift and I'd discover that CPAC or Mitt Romney or Glenn Beck or popular conservative topics wouldn't be trending because either the curator didn't recognize the news topic or it was like they had a bias against Ted Cruz."

Another former curator agreed that the operation had an aversion to right-wing news sources. "It was absolutely bias. We were doing it subjectively. It just depends on who the curator is and what time of day it is," said the former curator. "Every once in awhile a Red State or conservative news source would have a story. But we would have to go and find the same story from a more neutral outlet that wasn't as biased." Stories covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook's algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories. Other former curators interviewed by Gizmodo denied consciously suppressing conservative news, and we were unable to determine if left-wing news topics or sources were similarly suppressed. The conservative curator described the omissions as a function of his colleagues' judgements; there is no evidence that Facebook management mandated or was even aware of any political bias at work.

Managers on the trending news team did, however, explicitly instruct curators to artificially manipulate the trending module in a different way: when users weren't reading stories that management viewed as important, several former workers said, curators were told to put them in the trending news feed anyway. Several former curators described using something called an "injection tool" to push topics into the trending module that weren't organically being shared or discussed enough to warrant inclusion—putting the headlines in front of thousands of readers rather than allowing stories to surface on their own. In some cases, after a topic was injected, it actually became the number one trending news topic on Facebook.

"We were told that if we saw something, a news story that was on the front page of these ten sites, like CNN, the New York Times, and BBC, then we could inject the topic," said one former curator. "If it looked like it had enough news sites covering the story, we could inject it—even if it wasn't naturally trending." Sometimes, breaking news would be injected because it wasn't attaining critical mass on Facebook quickly enough to be deemed "trending" by the algorithm. Former curators cited the disappearance of Malaysia Airlines flight MH370 and the Charlie Hebdo attacks in Paris as two instances in which non-trending stories were forced into the module. Facebook has struggled to compete with Twitter when it comes to delivering real-time news to users; the injection tool may have been designed to artificially correct for that deficiency in the network. "We would get yelled at if it was all over Twitter and not on Facebook," one former curator said.
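The injection rule the curators describe (a story the algorithm hasn't surfaced can still be pushed into the module if enough major front pages are covering it) can be sketched roughly as below. The outlet list, threshold, and data shapes are assumptions for illustration only; Facebook's actual curation tooling has not been published.

```python
# Hypothetical sketch of the front-page "injection" rule described by former
# curators. Outlets, threshold, and inputs are illustrative assumptions.
MAJOR_OUTLETS = {"CNN", "New York Times", "BBC", "Washington Post", "The Guardian"}
MIN_OUTLETS = 3  # assumed stand-in for "enough news sites covering the story"

def should_inject(topic, organically_trending, front_page_coverage):
    """front_page_coverage maps a topic to the set of outlets front-paging it."""
    if topic in organically_trending:
        return False  # already surfaced by the algorithm; nothing to inject
    covering = front_page_coverage.get(topic, set()) & MAJOR_OUTLETS
    return len(covering) >= MIN_OUTLETS

# Example: a breaking story on three major front pages but not yet trending
# on Facebook would qualify for manual injection under this assumed rule.
print(should_inject("MH370", {"Oscars"}, {"MH370": {"CNN", "BBC", "New York Times"}}))  # True
```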

In other instances, curators would inject a story—even if it wasn’t being widely discussed on Facebook—because it was deemed important for making the network look like a place where people talked about hard news. “People stopped caring about Syria,” one former curator said. “[And] if it wasn’t trending on Facebook, it would make Facebook look bad.”

That same curator said the Black Lives Matter movement was also injected into Facebook’s trending news module. “Facebook got a lot of pressure about not having a trending topic for Black Lives Matter,” the individual said. “They realized it was a problem, and they boosted it in the ordering. They gave it preference over other topics. When we injected it, everyone started saying, ‘Yeah, now I’m seeing it as number one’.” This particular injection is especially noteworthy because the #BlackLivesMatter movement originated on Facebook, and the ensuing media coverage of the movement often noted its powerful social media presence. (In February, CEO Mark Zuckerberg expressed his support for the movement in an internal memo chastising Facebook employees for defacing Black Lives Matter slogans on the company’s internal “signature wall.”)

When stories about Facebook itself would trend organically on the network, news curators used less discretion—they were told not to include these stories at all. “When it was a story about the company, we were told not to touch it,” said one former curator. “It had to be cleared through several channels, even if it was being shared quite a bit. We were told that we should not be putting it on the trending tool.”
(The curators interviewed for this story worked for Facebook across a timespan ranging from mid-2014 to December 2015.)
“We were always cautious about covering Facebook,” said another former curator. “We would always wait to get second level approval before trending something to Facebook. Usually we had the authority to trend anything on our own [but] if it was something involving Facebook, the copy editor would call their manager, and that manager might even call their manager before approving a topic involving Facebook.”
Gizmodo reached out to Facebook for comment about each of these specific claims via email and phone, but did not receive a response.

Several former curators said that as the trending news algorithm improved, there were fewer instances of stories being injected. They also said that the trending news process was constantly being changed, so there’s no way to know exactly how the module is run now. But the revelations undermine any presumption of Facebook as a neutral pipeline for news, or of the trending news module as an algorithmically driven list of what people are actually talking about. Rather, Facebook’s efforts to play the news game reveal the company to be much like the news outlets it is rapidly driving toward irrelevancy: a select group of professionals with vaguely center-left sensibilities. It just happens to be one that poses as a neutral reflection of the vox populi, has the power to influence what billions of users see, and openly discusses whether it should use that power to influence presidential elections.

“It wasn’t trending news at all,” said the former curator who logged conservative news omissions. “It was an opinion.”

[Disclosure: Facebook has launched a program that pays publishers, including the New York Times and BuzzFeed, to produce videos for its Facebook Live tool. Gawker Media, Gizmodo’s parent company, recently joined that program.]

Several hours after this report was published, Gizmodo editors started seeing it as a topic in Facebook’s trending section. Gizmodo’s video was posted under the topic, but the “Top Posts” were links to RedState.com and the Faith and Freedom Coalition.
 


Is Facebook Censoring Conservative News, and How Does Social Media Control What We See?

I write about the broad intersection of data and society. Opinions expressed by Forbes Contributors are their own.
Mark Zuckerberg at the Mobile World Congress, walking past audience members immersed in virtual reality and entirely oblivious to his presence. (Image via Facebook)

Gizmodo’s Michael Nunez is out today with a sensational story in which former Facebook employees claim they regularly censored the platform’s “trending” news section to eliminate stories about conservative topics that were organically trending, blacklisted certain news outlets from appearing, and artificially “injected” stories they felt were important but that the site’s users were not discussing or clicking on. This comes a month after Nunez published a leaked internal Facebook poll that asked “What responsibility does Facebook have to help prevent President Trump in 2017?” In short, now that the curtain has been lifted on Facebook’s magical trending algorithm, the supposedly unbiased algorithm powering what users see on the site turns out to be less machine and more biased human curator. Yet, given Facebook’s phenomenal reach across the world and the role it increasingly plays as a primary news gateway for more and more people, the notion that it is systematically curating what its users see in a non-algorithmic and partisan way raises alarm bells about the future of how we access and consume information.
Ryan Merkley, CEO of Creative Commons, wrote in Wired last month that “If the Web has achieved anything, it’s that it’s eliminated the need for gatekeepers, and allowed creators—all of us—to engage directly without intermediaries, and to be accountable directly to each other.” Yet such a rosily optimistic view of the web’s impact on society seems to ignore the mounting evidence that the web is in fact merely coalescing around a new set of gatekeepers. As Jack Mirkinson wrote for Salon earlier this month, “the internet, that supposed smasher of gates and leveler of playing fields, has coalesced around a mere handful of mega-giants in the space of just a couple of decades. The gates didn’t really come down. The identities of the gatekeepers just changed. Google, Facebook, Apple, Amazon: How many people can really say that some portion of every day of their lives isn’t mediated by at least one of these companies? ... It seems that, at least for the moment, we are destined to live in the world that they create—and that includes everyone in the media business.”
Far from democratizing how we access the world’s information, the web has in fact narrowed those information sources. Much as large national chains and globalization have replaced the local mom-and-pop shop with the megastore and local craftsmanship with assembly-line production, the internet is centralizing information access, shifting it from a myriad of websites, local newspapers, and radio and television shows to a handful of behemoth social platforms that wield universal global control over what we consume.

Indeed, social media platforms appear to increasingly view themselves no longer as neutral publishing platforms but rather as active mediators and curators of what we see. This extends even to new services like messaging. David Marcus, Facebook’s Vice President of Messaging, recently told Wired: “Unlike email where there is no one safeguarding the quality and the quantity of the stuff you receive, we’re here in the middle to protect the quality and integrity of your messages and to ensure that you’re not going to get a lot of stuff you don’t want.” In short, Facebook wants to act as an intelligent filter on what we see of the world. The problem is that any filter, by design, must emphasize some content and views at the expense of others.
In the case of Facebook, the new revelations are most concerning because they go to the very heart of how these new social platforms shape what we understand about the world. It is one thing for a platform to announce it will delete posts that promote terrorism or that threaten another user with bodily harm, but to silently and systematically filter what users see through a distinct partisan lens, especially with regards to news reporting, adds a frightening dimension to just how much power a handful of Silicon Valley companies now wield over what we see online.
Ben Rhodes, deputy national security advisor for strategic communication at the White House, recently raised eyebrows when he remarked on the internet’s impact on news reporting: “All these newspapers used to have foreign bureaus. Now they don’t. They call us to explain to them what’s happening in Moscow and Cairo. Most of the outlets are reporting on world events from Washington. The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns. That’s a sea change. They literally know nothing.” In the interview he went on to claim that the White House is able to use social media to fill that information gap, putting its own talking points and interpretations out on social media, which he claims are then mindlessly parroted by the media. What happens when Facebook itself goes further and helps promote some of these viewpoints to its users while censoring others?
The notion that a social media platform would systematically censor particular viewpoints or news has unique import in a presidential election year. As The Hill put it, “Facebook is a key outreach, recruiting and advertising tool for presidential candidates, and it is a primary distribution hub for the political news media. It is also where much of the political debate between voters is taking place,” accounting for over 650 million interactions regarding political candidates in a single month this year. The notion that Facebook might be systematically altering what its users see to promote particular views is troubling at best.
 The Patriot Conservative News Tea Party Network

      liberalism + Socialism = Terrorism 
                      Thanks for your Support

                                

 © All copyrights reserved By Patcnews
     liberalism + Socialism = Terrorism

Lauren Feiner Reports: Apple Hits Back and Blocks Facebook From Running Its Internal iOS Apps

Facebook's internal iOS apps simply don't launch anymore.



LLC 501(c)(4) UCC 1-308. ALL RIGHTS RESERVED WITHOUT PREJUDICE



Content and Programming Copyright 2018 By Patcnews The Patriot Conservative News Tea Party Network © LLC UCC 1-308. ALL RIGHTS RESERVED WITHOUT PREJUDICE. All copyrights reserved By Patcnews The Patriot Conservative News Tea Party Network. Copyright 2018 CQ-Roll Call, Inc. All materials herein are protected by United States copyright law and may not be reproduced, distributed, transmitted, displayed, published or broadcast without the prior written permission of CQ-Roll Call. You may not alter or remove any trademark, copyright or other notice from copies of the content. © All Copyrights Reserved By Patcnews The Patriot Conservative News Tea Party Network
