Instagram introduces teen accounts, other sweeping changes to boost child safety online
https://www.pilotonline.com/2024/09/17/instagram-introduces-teen-accounts-other-sweeping-changes-to-boost-child-safety-online/
Tue, 17 Sep 2024

Instagram is introducing separate teen accounts for those under 18 as it tries to make the platform safer for children amid a growing backlash against how social media affects young people’s lives.

Beginning Tuesday in the U.S., U.K., Canada and Australia, anyone under 18 who signs up for Instagram will be placed into a teen account, and those with existing accounts will be migrated over the next 60 days. Teens in the European Union will see their accounts adjusted later this year.

Meta acknowledges that teenagers may lie about their age and says it will require them to verify their ages in more instances, such as when they try to create a new account with an adult birthdate. The Menlo Park, California, company also said it is building technology that proactively finds teen accounts that pretend to be grownups and automatically places them into the restricted teen accounts.

The teen accounts will be private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to. “Sensitive content,” such as videos of people fighting or those promoting cosmetic procedures, will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a “sleep mode” will be enabled that turns off notifications and sends auto-replies to direct messages from 10 p.m. until 7 a.m.

While these settings will be turned on for all teens, 16- and 17-year-olds will be able to turn them off. Kids under 16 will need their parents’ permission to do so.

“The three concerns we’re hearing from parents are that their teens are seeing content that they don’t want to see or that they’re getting contacted by people they don’t want to be contacted by or that they’re spending too much time on the app,” said Naomi Gleit, head of product at Meta. “So teen accounts is really focused on addressing those three concerns.”

The announcement comes as the company faces lawsuits from dozens of U.S. states that accuse it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

In the past, Meta’s efforts at addressing teen safety and mental health on its platforms have been met with criticism that the changes don’t go far enough. For instance, while kids will get a notification when they’ve spent 60 minutes on the app, they will be able to bypass it and continue scrolling.

That’s unless the child’s parents turn on “parental supervision” mode, which lets parents limit a teen’s time on Instagram to a set amount, such as 15 minutes.

With the latest changes, Meta is giving parents more options to oversee their kids’ accounts. Those under 16 will need a parent or guardian’s permission to change their settings to less restrictive ones. They can do this by setting up “parental supervision” on their accounts and connecting them to a parent or guardian.

Nick Clegg, Meta’s president of global affairs, said last week that parents don’t use the parental controls the company has introduced in recent years.

Gleit said she thinks teen accounts will create a “big incentive for parents and teens to set up parental supervision.”

“Parents will be able to see, via the family center, who is messaging their teen and hopefully have a conversation with their teen,” she said. “If there is bullying or harassment happening, parents will have visibility into who their teen’s following, who’s following their teen, who their teen has messaged in the past seven days and hopefully have some of these conversations and help them navigate these really difficult situations online.”

U.S. Surgeon General Vivek Murthy said last year that tech companies put too much of the burden on parents when it comes to keeping children safe on social media.

“We’re asking parents to manage a technology that’s rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage,” Murthy said in May 2023.

Meta, TikTok and other social media CEOs testify in heated Senate hearing on child exploitation
https://www.pilotonline.com/2024/01/30/meta-tiktok-and-other-social-media-ceos-testify-in-heated-senate-hearing-on-child-exploitation/
Tue, 30 Jan 2024
By BARBARA ORTUTAY and HALELUYA HADERO (Associated Press)

Sexual predators. Addictive features. Suicide and eating disorders. Unrealistic beauty standards. Bullying. These are just some of the issues young people are dealing with on social media — and children’s advocates and lawmakers say companies are not doing enough to protect them.

On Wednesday, the CEOs of Meta, TikTok, X and other social media companies went before the Senate Judiciary Committee to testify at a time when lawmakers and parents are growing increasingly concerned about the effects of social media on young people’s lives.

The hearing began with recorded testimony from kids and parents who said they or their children were exploited on social media. Throughout the hourslong event, parents who lost children to suicide silently held up pictures of their dead kids.

“They’re responsible for many of the dangers our children face online,” Senate Majority Whip Dick Durbin, who chairs the committee, said in opening remarks. “Their design choices, their failures to adequately invest in trust and safety, their constant pursuit of engagement and profit over basic safety have all put our kids and grandkids at risk.”

In a heated question-and-answer session with Mark Zuckerberg, Republican Missouri Sen. Josh Hawley asked the Meta CEO if he has personally compensated any of the victims and their families for what they have been through.

“I don’t think so,” Zuckerberg replied.

“There’s families of victims here,” Hawley said. “Would you like to apologize to them?”

Zuckerberg stood, turned away from his microphone and the senators, and directly addressed the parents in the gallery.

“I’m sorry for everything you have all been through. No one should go through the things that your families have suffered,” he said, adding that Meta continues to invest and work on “industrywide efforts” to protect children.

But time and time again, children’s advocates and parents have stressed that none of the companies are doing enough.

One of the parents who attended the hearing was Neveen Radwan, whose teenage daughter got sucked into a “black hole of dangerous content” on TikTok and Instagram after she started looking at videos on healthy eating and exercise at the onset of the COVID lockdowns. She developed anorexia within a few months and nearly died, Radwan recalled.

“Nothing that was said today was different than what we expected,” Radwan said. “It was a lot of promises and a lot of, quite honestly, a lot of talk without them really saying anything. The apology that he made, while it was appreciated, it was a little bit too little, too late, of course.”

But Radwan, whose daughter is now 19 and in college, said she felt a “significant shift” in the energy as she sat through the hearing, listening to the senators grill the social media CEOs in tense exchanges.

“The energy in the room was very, very palpable. Just by our presence there, I think it was very noticeable how our presence was affecting the senators,” she said.

Hawley continued to press Zuckerberg, asking if he’d take personal responsibility for the harms his company has caused. Zuckerberg stayed on message and repeated that Meta’s job is to “build industry-leading tools” and empower parents.

“To make money,” Hawley cut in.

South Carolina Sen. Lindsey Graham, the top Republican on the Judiciary panel, echoed Durbin’s sentiments and said he’s prepared to work with Democrats to solve the issue.

“After years of working on this issue with you and others, I’ve come to conclude the following: Social media companies as they’re currently designed and operate are dangerous products,” Graham said.

The executives touted existing safety tools on their platforms and the work they’ve done with nonprofits and law enforcement to protect minors.

Snapchat broke ranks ahead of the hearing and is backing a federal bill that would create a legal liability for apps and social platforms that recommend harmful content to minors. Snap CEO Evan Spiegel reiterated the company’s support on Wednesday and asked the industry to back the bill.

TikTok CEO Shou Zi Chew said the company is vigilant about enforcing its policy barring children under 13 from using the app. X CEO Linda Yaccarino said the platform, formerly known as Twitter, doesn’t cater to children.

“We do not have a line of business dedicated to children,” Yaccarino said. She said the company will also support the Stop CSAM Act, a federal bill that makes it easier for victims of child exploitation to sue tech companies.

Yet child health advocates say social media companies have failed repeatedly to protect minors.

Profits should not be the primary concern when companies are faced with safety and privacy decisions, said Zamaan Qureshi, co-chair of Design It For Us, a youth-led coalition advocating for safer social media. “These companies have had opportunities to do this before; they failed to do that. So independent regulation needs to step in.”

Republican and Democratic senators came together in a rare show of agreement throughout the hearing, though it’s not yet clear if this will be enough to pass legislation such as the Kids Online Safety Act, proposed in 2022 by Sens. Richard Blumenthal of Connecticut and Marsha Blackburn of Tennessee.

“There is pretty clearly a bipartisan consensus that the status quo isn’t working,” said New Mexico Attorney General Raúl Torrez, a Democrat. “When it comes to how these companies have failed to prioritize the safety of children, there’s clearly a sense of frustration on both sides of the aisle.”

Meta is being sued by dozens of states that say it deliberately designs features on Instagram and Facebook that addict children to its platforms. New Mexico filed a separate lawsuit saying the company has failed to protect children from online predators.

New internal emails between Meta executives released by Blumenthal’s office show Nick Clegg, the company’s president of global affairs, and others asking Zuckerberg to hire more people to strengthen “wellbeing across the company” as concerns grew about effects on youth mental health.

“From a policy perspective, this work has become increasingly urgent over recent months. Politicians in the U.S., U.K., E.U. and Australia are publicly and privately expressing concerns about the impact of our products on young people’s mental health,” Clegg wrote in an August 2021 email.

The emails released by Blumenthal’s office don’t appear to include a response, if there was any, from Zuckerberg. In September 2021, The Wall Street Journal released the Facebook Files, its report based on internal documents from whistleblower Frances Haugen, who later testified before the Senate. Clegg followed up on the August email in November with a scaled-down proposal but it does not appear that anything was approved.

“I’ve spoken to many of the parents at the hearing. The harm their children experienced, all that loss of innocent life, is eminently preventable. When Mark says ‘Our job is building the best tools we can,’ that is just not true,” said Arturo Béjar, a former engineering director at the social media giant known for his expertise in curbing online harassment who recently testified before Congress about child safety on Meta’s platforms. “They know how much harm teens are experiencing, yet they won’t commit to reducing it, and most importantly to be transparent about it. They have the infrastructure to do it, the research, the people, it is a matter of prioritization.”

Béjar said the emails and Zuckerberg’s testimony show that Meta and its CEO “do not care about the harm teens experience” on their platforms.

“Nick Clegg writes about profound gaps with addiction, self-harm, bullying and harassment to Mark. Mark did not respond, and those gaps are unaddressed today. Clegg asked for 84 engineers of 30,000,” Béjar said. “Children are not his priority.”

___

Associated Press writer Mary Clare Jalonick contributed to this story.

Social media companies made $11 billion in US ad revenue from minors, Harvard study finds
https://www.pilotonline.com/2023/12/27/social-media-companies-made-11-billion-in-us-ad-revenue-from-minors-harvard-study-finds/
Wed, 27 Dec 2023
By BARBARA ORTUTAY and HALELUYA HADERO (AP Technology Writers)

Social media companies collectively made over $11 billion in U.S. advertising revenue from minors last year, according to a study from the Harvard T.H. Chan School of Public Health published on Wednesday.

The researchers say the findings show a need for government regulation of social media since the companies that stand to make money from children who use their platforms have failed to meaningfully self-regulate. They note such regulations, as well as greater transparency from tech companies, could help alleviate harms to youth mental health and curtail potentially harmful advertising practices that target children and adolescents.

To come up with the revenue figure, the researchers estimated the number of users under 18 on Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter) and YouTube in 2022 based on population data from the U.S. Census and survey data from Common Sense Media and Pew Research. They then used data from research firm eMarketer, now called Insider Intelligence, and Qustodio, a parental control app, to estimate each platform’s U.S. ad revenue in 2022 and the time children spent per day on each platform. After that, the researchers said they built a simulation model using the data to estimate how much ad revenue the platforms earned from minors in the U.S.
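To make the arithmetic concrete, here is a toy version of that apportionment step in Python. It is not the researchers’ model — the figures are invented, and the actual study ran a simulation over several data sources — but it illustrates the core idea of attributing ad revenue to minors in proportion to their share of total user-time.

# Toy illustration of the apportionment idea described above. All figures
# are invented for the example; the actual study combined Census, Common
# Sense Media, Pew, eMarketer and Qustodio data in a simulation model
# rather than a single point estimate like this one.

def minor_ad_revenue(total_revenue, minor_users, minor_min_per_day,
                     adult_users, adult_min_per_day):
    """Apportion ad revenue by each group's share of total daily user-minutes."""
    minor_minutes = minor_users * minor_min_per_day
    adult_minutes = adult_users * adult_min_per_day
    return total_revenue * minor_minutes / (minor_minutes + adult_minutes)

# Hypothetical platform: $10B in U.S. ad revenue, 20M minors averaging
# 90 minutes a day, 150M adults averaging 45 minutes a day.
estimate = minor_ad_revenue(10e9, 20e6, 90, 150e6, 45)
print(f"Estimated ad revenue from minors: ${estimate / 1e9:.2f}B")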

Researchers and lawmakers have long focused on the negative effects stemming from social media platforms, whose personally tailored algorithms can drive children toward excessive use. This year, lawmakers in states like New York and Utah introduced or passed legislation that would curb social media use among kids, citing harms to youth mental health and other concerns.

Meta, which owns Instagram and Facebook, is also being sued by dozens of states for allegedly contributing to the mental health crisis.

“Although social media platforms may claim that they can self-regulate their practices to reduce the harms to young people, they have yet to do so, and our study suggests they have overwhelming financial incentives to continue to delay taking meaningful steps to protect children,” said Bryn Austin, a professor in the Department of Social and Behavioral Sciences at Harvard and a senior author on the study.

The platforms themselves don’t make public how much money they earn from minors.

Social media platforms are not the first to advertise to children, and parents and experts have long expressed concerns about marketing to kids online, on television and even in schools. But online ads can be especially insidious because they can be targeted to children and because the line between ads and the content kids seek out is often blurry.

In a 2020 policy paper, the American Academy of Pediatrics said children are “uniquely vulnerable to the persuasive effects of advertising because of immature critical thinking skills and impulse inhibition.”

“School-aged children and teenagers may be able to recognize advertising but often are not able to resist it when it is embedded within trusted social networks, encouraged by celebrity influencers, or delivered next to personalized content,” the paper noted.

As concerns about social media and children’s mental health grow, the Federal Trade Commission earlier this month proposed sweeping changes to a decades-old law that regulates how online companies can track and advertise to children. The proposed changes include turning off targeted ads to kids under 13 by default and limiting push notifications.

According to the Harvard study, YouTube derived the greatest ad revenue from users 12 and under ($959.1 million), followed by Instagram ($801.1 million) and Facebook ($137.2 million).

Instagram, meanwhile, derived the greatest ad revenue from users aged 13-17 ($4 billion), followed by TikTok ($2 billion) and YouTube ($1.2 billion).

The researchers also estimate that Snapchat derived the greatest share of its overall 2022 ad revenue from users under 18 (41%), followed by TikTok (35%), YouTube (27%), and Instagram (16%).

The year of social media soul-searching: Twitter dies, X and Threads are born and AI gets personal
https://www.pilotonline.com/2023/12/22/the-year-of-social-media-soul-searching-twitter-dies-x-and-threads-are-born-and-ai-gets-personal/
Fri, 22 Dec 2023

We lost Twitter and got X. We tried out Bluesky and Mastodon (well, some of us did). We fretted about AI bots and teen mental health. We cocooned in private chats and scrolled endlessly as we did in years past. For social media users, 2023 was a year of beginnings and endings, with some soul-searching in between.

Here’s a look back at some of the biggest stories in social media in 2023 — and what to watch for next year:

A little more than a year ago, Elon Musk walked into Twitter’s San Francisco headquarters, fired its CEO and other top executives and began transforming the social media platform into what’s now known as X.

Musk revealed the X logo in July. It quickly replaced Twitter’s name and its whimsical blue bird icon, online and on the company’s San Francisco headquarters.

“And soon we shall bid adieu to the twitter brand and, gradually, all the birds,” Musk posted on the site.

Because of its public nature and because it attracted public figures, journalists and other high-profile users, Twitter always had an outsized influence on popular culture — but that influence seems to be waning.

“It had a lot of problems even before Musk took it over, but it was a beloved brand with a clear role in the social media landscape,” said Jasmine Enberg, a social media analyst at Insider Intelligence. “There are still moments of Twitter magic on the platform, like when journalists took to the platform to post real-time updates about the OpenAI drama, and the smaller communities on the platform remain important to many users. But the Twitter of the past 17 years is largely gone, and X’s reason for existence is murky.”

Since Musk’s takeover, X has been bombarded by allegations of misinformation and racism, endured significant advertising losses and suffered declines in usage. It didn’t help when Musk went on an expletive-ridden rant in an on-stage interview about companies that had halted spending on X. Musk asserted that advertisers that pulled out were engaging in “blackmail” and, using a profanity, essentially told them to get lost.

Continuing the trend of welcoming back users who had been banned by the former Twitter for hate speech or spreading misinformation, in December, Musk restored the X account of conspiracy theorist Alex Jones, pointing to an unscientific poll he posted to his followers that came out in favor of the Infowars host who repeatedly called the 2012 Sandy Hook school shooting a hoax.

LGBTQ and other organizations supporting marginalized groups, meanwhile, have been raising alarms about X becoming less safe. In April, for instance, it quietly removed a policy against the “targeted misgendering or deadnaming of transgender individuals.” In June, the advocacy group GLAAD called it “the most dangerous platform for LGBTQ people.”

GLSEN, an LGBTQ education group, announced in December that it was leaving X, joining other groups such as the suicide prevention nonprofit Trevor Project, saying that Musk’s changes “have birthed a new platform that enables its users to harass and target the LGBTQ+ community without restriction or discipline.”

Musk’s ambitions for X include transforming the platform into an “everything app” — like China’s WeChat, for instance. The problem? It’s not clear if U.S. and Western audiences are keen on the idea. And Musk himself has been pretty vague on the specifics.

While X contends with an identity crisis, some users began looking for a replacement. Mastodon was one contender, along with Bluesky, which actually grew out of Twitter — a pet project of former CEO Jack Dorsey, who still sits on its board of directors.

When tens of thousands of people, many of them fed-up Twitter users, began signing up for the (still) invite-only Bluesky in the spring, the app had fewer than 10 people working on it, CEO Jay Graber said recently.

This meant “scrambling to keep everything working, keeping people online, scrambling to add features that we had on the roadmap,” she said. For weeks, the work was simply “scaling” — ensuring that the systems could handle the influx.

“We had one person on the app for a while, which was very funny, and there were memes about Paul versus all of Twitter’s engineers,” she recalled. “I don’t think we hired a second app developer until after the crazy growth spurt.”

Seeing an opportunity to lure in disgruntled Twitter users, Facebook parent Meta launched its own rival, Threads, in July. It soared to popularity as tens of millions began signing up — though keeping people on has been a bit of a challenge. Then, in December, Meta CEO Mark Zuckerberg announced in a surprise move that the company was testing interoperability — the idea championed by Mastodon, Bluesky and other decentralized social networks that people should be able to use their accounts on different platforms — kind of like your email address or phone number.

“Starting a test where posts from Threads accounts will be available on Mastodon and other services that use the ActivityPub protocol,” Zuckerberg posted on Threads in December. “Making Threads interoperable will give people more choice over how they interact and it will help content reach more people. I’m pretty optimistic about this.”

Social media’s impact on children’s mental health hurtled toward a reckoning this year, with the U.S. surgeon general warning in May that there is not enough evidence to show that social media is safe for children and teens — and calling on tech companies, parents and caregivers to take “immediate action to protect kids now.”

“We’re asking parents to manage a technology that’s rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage,” Dr. Vivek Murthy told The Associated Press. “And we’re putting all of that on the shoulders of parents, which is just simply not fair.”

In October, dozens of U.S. states sued Meta for harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

In November, Arturo Béjar, a former engineering director at Meta, testified before a Senate subcommittee about social media and the teen mental health crisis, hoping to shed light on how Meta executives, including Zuckerberg, knew about the harms Instagram was causing but chose not to make meaningful changes to address them.

The testimony came amid a bipartisan push in Congress to adopt regulations aimed at protecting children online. In December, the Federal Trade Commission proposed sweeping changes to a decades-old law that regulates how online companies can track and advertise to children, including turning off targeted ads to kids under 13 by default and limiting push notifications.

Your AI friends have arrived — but chatbots are just the beginning. Standing in a courtyard at his company’s Menlo Park, California, headquarters, Zuckerberg said this fall that Meta is “focused on building the future of human connection” — and painted a near-future where people interact with hologram versions of their friends or coworkers and with AI bots built to assist them. The company unveiled an army of AI bots — with celebrities such as Snoop Dogg and Paris Hilton lending their faces to play them — that social media users can interact with.

Next year, AI will be “integrated into virtually every corner of the platforms,” Enberg said.

“Social apps will use AI to drive usage, ad performance and revenues, subscription sign ups, and commerce activity. AI will deepen both users’ and advertisers’ reliance and relationship with social media, but its implementation won’t be entirely smooth sailing as consumer and regulatory scrutiny will intensify,” she added.

The analyst also sees subscriptions as an increasingly attractive revenue stream for some platforms. Inspired by Musk’s X, subscriptions “started as a way to diversify or boost revenues as social ad businesses took a hit, but they have persisted and expanded even as the social ad market has steadied itself.”

With major elections coming up in the U.S. and India among other countries, AI’s and social media’s role in misinformation will continue to be front and center for social media watchers.

“We’re not prepared for this,” A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox, told the AP in May. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

X says Walmart pulled ads in October, weeks before hate speech report and Musk rant
https://www.pilotonline.com/2023/12/01/x-says-walmart-pulled-ads-in-october-weeks-before-media-matters-hate-speech-report-and-musk-rant/
Fri, 01 Dec 2023

Walmart is the latest company to join the growing flock of major advertisers publicly pulling spending from X, Elon Musk’s beleaguered social media company, amid concerns about hate speech — as well as doubts about reaching a sizeable audience on the platform.

“We aren’t advertising on X as we’ve found some other platforms better reach our customers,” Walmart said in a statement.

While Walmart went public with the pullout on Friday, Joe Benarroch, head of operations at X, said the company has not advertised on the platform since October. The company, he added, “has just been organically connecting with its community of more than one million people on X.”

Walmart did not immediately respond to a message for further comment on Friday afternoon.

The announcement comes two days after Musk went on an expletive-ridden rant in an on-stage interview with journalist Andrew Ross Sorkin about companies halting spending on X, formerly known as Twitter, in response to antisemitic and other hateful material. Musk said advertisers pulling out are engaging in “blackmail” and, using a profanity, essentially told them to go away.

“Don’t advertise,” Musk said.

Besides Walmart, the Walt Disney Co., IBM, NBCUniversal and its parent company Comcast have also decided to stop spending on X. Many pulled out earlier this month after the liberal advocacy group Media Matters issued a report showing their ads were appearing alongside material praising Nazis. X has sued the group, saying it “manufactured” the report in order to “drive advertisers from the platform and destroy X Corp.”

X’s CEO, Linda Yaccarino, is a former NBCUniversal executive who was hired by Musk to rebuild ties with advertisers who fled after he took over, concerned that his easing of content restrictions was allowing hateful and toxic speech to flourish and that it would harm their brands. But X’s relations with advertisers don’t appear to be improving.

“Walmart has a wonderful community of more than a million people on X, and with a half a billion people on X, every year the platform experiences 15 billion impressions about the holidays alone with more than 50% of X users doing most or all of their shopping online,” Benarroch said in a statement.

A Meta engineer saw his own child face harassment on Instagram. Now, he’s testifying before Congress
https://www.pilotonline.com/2023/11/07/a-meta-engineer-saw-his-own-child-face-harassment-on-instagram-now-hes-testifying-before-congress/
Tue, 07 Nov 2023

On the same day whistleblower Frances Haugen was testifying before Congress about the harms of Facebook and Instagram to children in the fall of 2021, a former engineering director at the social media giant who had rejoined the company as a consultant sent an alarming email to Meta CEO Mark Zuckerberg about the same topic.

Arturo Béjar, known for his expertise on curbing online harassment, recounted to Zuckerberg his own daughter’s troubling experiences with Instagram. But he said his concerns and warnings went unheeded. And on Tuesday, it was Béjar’s turn to testify to Congress.

“I appear before you today as a dad with firsthand experience of a child who received unwanted sexual advances on Instagram,” he told a panel of U.S. senators.

Béjar worked as an engineering director at Facebook from 2009 to 2015, attracting wide attention for his work to combat cyberbullying. He thought things were getting better. But between leaving the company and returning in 2019 as a contractor, Béjar’s own daughter had started using Instagram.

“She and her friends began having awful experiences, including repeated unwanted sexual advances, harassment,” he testified Tuesday. “She reported these incidents to the company and it did nothing.”

In the 2021 note, as first reported by The Wall Street Journal, Béjar outlined a “critical gap” between how the company approached harm and how the people who use its products — most notably young people — experience it.

“Two weeks ago my daughter, 16, and an experimenting creator on Instagram, made a post about cars, and someone commented ‘Get back to the kitchen.’ It was deeply upsetting to her,” he wrote. “At the same time the comment is far from being policy violating, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny. I don’t think policy/reporting or having more content review are the solutions.”

Béjar testified before a Senate subcommittee on Tuesday about social media and the teen mental health crisis, hoping to shed light on how Meta executives, including Zuckerberg, knew about the harms Instagram was causing but chose not to make meaningful changes to address them.

He believes that Meta needs to change how it polices its platforms, with a focus on addressing harassment, unwanted sexual advances and other bad experiences even if these problems don’t clearly violate existing policies. For instance, sending vulgar sexual messages to children doesn’t necessarily break Instagram’s rules, but Béjar said teens should have a way to tell the platform they don’t want to receive these types of messages.

“I can safely say that Meta’s executives knew the harm that teenagers were experiencing, that there were things that they could do that are very doable and that they chose not to do them,” Béjar told The Associated Press. This, he said, makes it clear that “we can’t trust them with our children.”

Opening the hearing Tuesday, Sen. Richard Blumenthal, a Connecticut Democrat who chairs the Senate Judiciary’s privacy and technology subcommittee, introduced Béjar as an engineer “widely respected and admired in the industry” who was hired specifically to help prevent harms against children but whose recommendations were ignored.

“What you have brought to this committee today is something every parent needs to hear,” added Missouri Sen. Josh Hawley, the panel’s ranking Republican.

Béjar pointed to user surveys carefully crafted by the company that show, for instance, that 13% of Instagram users — ages 13-15 — reported having received unwanted sexual advances on the platform within the previous seven days.

Béjar said he doesn’t believe the reforms he’s suggesting would significantly affect revenue or profits for Meta and its peers. They are not intended to punish the companies, he said, but to help teenagers.

“You heard the company talk about it ‘oh this is really complicated,’” Béjar told the AP. “No, it isn’t. Just give the teen a chance to say ‘this content is not for me’ and then use that information to train all of the other systems and get feedback that makes it better.”

The testimony comes amid a bipartisan push in Congress to adopt regulations aimed at protecting children online.

Meta, in a statement, said “Every day countless people inside and outside of Meta are working on how to help keep young people safe online. The issues raised here regarding user perception surveys highlight one part of this effort, and surveys like these have led us to create features like anonymous notifications of potentially hurtful content and comment warnings. Working with parents and experts, we have also introduced over 30 tools to support teens and their families in having safe, positive experiences online. All of this work continues.”

Regarding unwanted material users see that does not violate Instagram’s rules, Meta points to its 2021 “content distribution guidelines” that say “problematic or low quality” content automatically receives reduced distribution on users’ feeds. This includes clickbait, misinformation that’s been fact-checked and “borderline” posts, such as a “photo of a person posing in a sexually suggestive manner, speech that includes profanity, borderline hate speech, or gory images.”

In 2022, Meta also introduced “kindness reminders” that tell users to be respectful in their direct messages — but it only applies to users who are sending message requests to a creator, not a regular user.

Tuesday’s testimony comes just two weeks after dozens of U.S. states sued Meta for harming young people and contributing to the youth mental health crisis. The lawsuits, filed in state and federal courts, claim that Meta knowingly and deliberately designs features on Instagram and Facebook that addict children to its platforms.

Béjar said it is “absolutely essential” that Congress passes bipartisan legislation “to help ensure that there is transparency about these harms and that teens can get help” with the support of the right experts.

“The most effective way to regulate social media companies is to require them to develop metrics that will allow both the company and outsiders to evaluate and track instances of harm, as experienced by users. This plays to the strengths of what these companies can do, because data for them is everything,” he wrote in his prepared testimony.

Twitter takeover: 1 year later, X struggles with misinformation, advertising and usage decline
https://www.pilotonline.com/2023/10/27/twitter-takeover-1-year-later-x-struggles-with-misinformation-advertising-and-usage-decline/
Fri, 27 Oct 2023
By BARBARA ORTUTAY (AP Technology Writer)

SAN FRANCISCO (AP) — One year ago, billionaire and new owner Elon Musk walked into Twitter’s San Francisco headquarters with a white bathroom sink and a grin, fired its CEO and other top executives and began transforming the social media platform into what is now known as X.

X looks and feels something like Twitter, but the more time you spend on it the clearer it becomes that it’s merely an approximation. Musk has dismantled core features of what made Twitter, Twitter — its name and blue bird logo, its verification system, its Trust and Safety advisory group. Not to mention content moderation and hate speech enforcement.

He also fired, laid off or lost the majority of its workforce — engineers who keep the site running, moderators who keep it from being overrun with hate, executives in charge of making rules and enforcing them.

The result, long-term Twitter watchers say, has been the end of the platform’s role as an imperfect but useful place to find out what’s going on in the world. What X will become, and whether Musk can achieve his ambition of turning it into an “everything app” that everyone uses, remains as unclear as it was a year ago.

“Musk hasn’t managed to make a single meaningful improvement to the platform and is no closer to his vision of an ‘everything app’ than he was a year ago,” said Insider Intelligence analyst Jasmine Enberg. “Instead, X has driven away users, advertisers, and now it has lost its primary value proposition in the social media world: Being a central hub for news.”

As one of the platform’s most popular and prolific users even before he bought the company, Musk had a unique experience on Twitter that is markedly different from how regular users experience it. But many of the changes he’s introduced to X have been based on his own impressions of the site — in fact, he even polled his millions of followers for advice on how to run it (they said he should step down).

“Musk’s treatment of the platform as a technology company that he could remake in his vision rather than a social network fueled by people and ad dollars has been the single largest cause of the demise of Twitter,” Enberg said.

The blue checkmarks that once signified that the person or institution behind an account was who they said they were — a celebrity, athlete, journalist from a global or local publication, a nonprofit agency — now merely show that someone pays $8 a month for a subscription service that boosts their posts above those of un-checked users. It’s these paying accounts that have been found to spread misinformation that is often amplified by the platform’s algorithms.

On Thursday, for instance, a new report from the left-leaning nonprofit Media Matters found that numerous blue-checked X accounts with tens of thousands of followers claimed that the mass shooting in Maine was a “false flag,” planned by the government. Researchers also found such accounts spreading misinformation and propaganda about the Israel-Hamas war — so much so that the European Commission made a formal, legally binding request for information to X over its handling of hate speech, misinformation and violent terrorist content related to the war.

Ian Bremmer, a prominent foreign policy expert, posted on X this month that the level of disinformation on the Israel-Hamas war “being algorithmically promoted” on the platform “is unlike anything I’ve ever been exposed to in my career as a political scientist.”

It’s not just the platform’s identity that’s on shaky ground. Twitter was already struggling financially when Musk purchased it for $44 billion in a deal that closed Oct. 27, 2022, and the situation appears more precarious today. Musk took the company private, so its books are no longer public — but in July, the Tesla CEO said the company had lost about half of its advertising revenue and continues to face a large debt load.

“We’re still negative cash flow,” he posted on the site on July 14, due to about a “50% drop in advertising revenue plus heavy debt load.”

“Need to reach positive cash flow before we have the luxury of anything else,” he said.

In May, Musk hired Linda Yaccarino, a former NBC executive with deep ties to the advertising industry, in an attempt to lure back top brands, but the effort has been slow to pay off. While some advertisers have returned to X, they are not spending as much as they did in the past — despite a rebound in the online advertising market that boosted the most recent quarterly profits for Facebook parent company Meta and Google parent company Alphabet.

Insider Intelligence estimates that X will bring in $1.89 billion in advertising revenue this year, down 54% from 2022. The last time its ad revenue was near this level was in 2015, when it came in at $1.99 billion. In 2022, it was $4.12 billion according to the research firm’s estimates.

Outside research also shows that people are using X less.

According to research firm Similarweb, global web traffic to Twitter.com was down 14%, year-over-year, and traffic to the ads.twitter.com portal for advertisers was down 16.5%. Performance on mobile was no better, down 17.8% year-over-year based on combined monthly active users for Apple’s iOS and Android.

“Even though the cultural relevance of Twitter was already starting to decline,” before Musk took it over, “it’s as if the platform no longer exists. And it’s been a death by a thousand cuts,” Enberg said.

“What’s really fascinating is that almost all of the wounds have been self-inflicted. Usually when a social platform starts to lose its relevance, there are at least some external factors at play, but that’s not the case here.”

Virginia joins states suing Meta, claiming its social platforms are addictive and harm children’s mental health
https://www.pilotonline.com/2023/10/24/states-sue-meta-claiming-its-social-platforms-are-addictive-and-harm-childrens-mental-health/
Tue, 24 Oct 2023

Dozens of U.S. states, including Virginia, California and New York, are suing Meta Platforms Inc. for harming young people’s mental health and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

A lawsuit filed by 33 states in federal court in California claims that Meta routinely collects data on children under 13 without their parents’ consent, in violation of federal law. In addition, nine attorneys general are filing lawsuits in their respective states, bringing the total number of states taking action to 41, plus Washington, D.C.

“Meta has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens. Its motive is profit, and in seeking to maximize its financial gains, Meta has repeatedly misled the public about the substantial dangers of its social media platforms,” the complaint says. “It has concealed the ways in which these platforms exploit and manipulate its most vulnerable consumers: teenagers and children.”

The suits seek financial damages and restitution and an end to Meta’s practices that are in violation of the law.

“Kids and teenagers are suffering from record levels of poor mental health and social media companies like Meta are to blame,” said New York Attorney General Letitia James in a statement. “Meta has profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem.”

In a statement, Meta said it shares “the attorneys general’s commitment to providing teens with safe, positive experiences online, and have already introduced over 30 tools to support teens and their families.”

“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” the company added.

The broad-ranging federal suit is the result of an investigation led by a bipartisan coalition of attorneys general from California, Florida, Kentucky, Massachusetts, Nebraska, New Jersey, Tennessee, and Vermont. It follows damning newspaper reports, first by The Wall Street Journal in the fall of 2021, based on Meta’s own research that found that the company knew about the harms Instagram can cause teenagers — especially teen girls — when it comes to mental health and body image issues. One internal study cited 13.5% of teen girls saying Instagram makes thoughts of suicide worse and 17% of teen girls saying it makes eating disorders worse.

Following the first reports, a consortium of news organizations, including The Associated Press, published their own findings based on leaked documents from whistleblower Frances Haugen, who has testified before Congress and a British parliamentary committee about what she found.

“Meta has been harming our children and teens, cultivating addiction to boost corporate profits,” said California Attorney General Rob Bonta. “With today’s lawsuit, we are drawing the line.”

The use of social media among teens is nearly universal in the U.S. and many other parts of the world. Almost all teens ages 13 to 17 in the U.S. report using a social media platform, with about a third saying they use social media “almost constantly,” according to the Pew Research Center.

To comply with federal regulation, social media companies ban kids under 13 from signing up to their platforms — but children have been shown to easily get around the bans, both with and without their parents’ consent, and many younger kids have social media accounts. The states’ complaint says Meta knowingly violated this law, the Children’s Online Privacy Protection Act, by collecting data on children without informing and getting permission from their parents.

Other measures social platforms have taken to address concerns about children’s mental health are also easily circumvented. For instance, TikTok recently introduced a default 60-minute time limit for users under 18. But once the limit is reached, minors can simply enter a passcode to keep watching. TikTok, Snapchat and other social platforms that have also been blamed for contributing to the youth mental health crisis are not part of Tuesday’s lawsuit.

Washington, D.C., Attorney General Brian Schwalb wouldn’t comment on whether the attorneys general are also looking at TikTok or Snapchat. For now, they’re focusing on the Meta empire of Facebook and Instagram, he said.

“They’re the worst of the worst when it comes to using technology to addict teenagers to social media, all in the furtherance of putting profits over people.”

In May, U.S. Surgeon General Dr. Vivek Murthy called on tech companies, parents and caregivers to take “immediate action to protect kids now” from the harms of social media.

__

Associated Press Writers Michael Casey, Michael Goldberg, Susan Haigh, Maysoon Khan and Ashraf Khalil contributed to this story.

Sony’s Access controller for the PlayStation aims to make gaming easier for people with disabilities
https://www.pilotonline.com/2023/10/12/sonys-access-controller-for-the-playstation-aims-to-make-gaming-easier-for-people-with-disabilities/
Thu, 12 Oct 2023

SAN MATEO, Calif. (AP) — Paul Lane uses his mouth, cheek and chin to push buttons and guide his virtual car around the “Gran Turismo” racetrack on the PlayStation 5. It’s how he’s been playing for the past 23 years, after a car accident left him unable to use his fingers.

Playing video games has long been a challenge for people with disabilities, chiefly because the standard controllers for the PlayStation, Xbox or Nintendo can be difficult, or even impossible, to maneuver for people with limited mobility. And losing the ability to play the games doesn’t just mean the loss of a favorite pastime, it can also exacerbate social isolation in a community already experiencing it at a far higher rate than the general population.

As part of the gaming industry’s efforts to address the problem, Sony has developed the Access controller for the PlayStation, working with input from Lane and other accessibility consultants. It’s the latest addition to the accessible-controller market, whose contributors range from Microsoft to startups and even hobbyists with 3D printers.

“I was big into sports before my injury,” said Cesar Flores, 30, who has used a wheelchair since a car accident eight years ago and also consulted Sony on the controller. “I wrestled in high school, played football. I lifted a lot of weights, all these little things. And even though I can still train in certain ways, there are physical things that I can’t do anymore. And when I play video games, it reminds me that I’m still human. It reminds me that I’m still one of the guys.”

Putting the traditional controller aside, Lane, 52, switches to the Access. It’s a round, customizable gadget that can rest on a table or wheelchair tray and can be configured in myriad ways, depending on what the user needs. That includes switching buttons and thumbsticks, programming special controls and pairing two controllers to be used as one. Lane’s “Gran Turismo” car zooms around a digital track as he guides it with the back of his hand on the controller.

“I game kind of weird, so it’s comfortable for me to be able to use both of my hands when I game,” he said. “So I need to position the controllers away enough so that I can be able to use them without clunking into each other. Being able to maneuver the controllers has been awesome, but also the fact that this controller can come out of the box and ready to work.”

Lane and other gamers have been working with Sony since 2018 to help design the Access controller. The idea was to create something that could be configured to work for people with a broad range of needs, rather than focusing on any particular disability.

“Show me a person with multiple sclerosis and I’ll show you a person who can be hard of hearing, I can show someone who has a visual impairment or a motor impairment,” said Mark Barlet, founder and executive director of the nonprofit AbleGamers. “So thinking on the label of a disability is not the approach to take. It’s about the experience that players need to bridge that gap between a game and a controller that’s not designed for their unique presentation in the world.”

Barlet said his organization, which helped both Sony and Microsoft with their accessible controllers, has been advocating for gamers with disabilities for nearly two decades. With the advent of social media, gamers themselves have been able to amplify the message and address creators directly in forums that did not exist before.

“The last five years I have seen the game accessibility movement go from indie studios working on some features to triple-A games being able to be played by people who identify as blind,” he said. “In five years, it’s been breathtaking.”

Microsoft, in a statement, said it was encouraged by the positive reaction to its Xbox Adaptive controller when it was released in 2018 and that it is “heartening to see others in the industry apply a similar approach to include more players in their work through a focus on accessibility.”

The Access controller will go on sale worldwide on Dec. 6 and cost $90 in the U.S.

Alvin Daniel, a senior technical program manager at PlayStation, said the device was designed with three principles in mind to make it “broadly applicable” to as many players as possible. First, the player does not have to hold the controller to use it. It can lay flat on a table, wheelchair tray or be mounted on a tripod, for instance. It was important for it to fit on a wheelchair tray, since once something falls off the tray, it might be impossible for the player to pick it up without help. It also had to be durable for this same reason — so it would survive being run over by a wheelchair, for example.

Second, it’s much easier to press the buttons than on a standard controller. It’s a kit, so it comes with button caps in different sizes, shapes and textures so people can experiment with reconfiguring it the way it works best for them. The third is the thumbsticks, which can also be configured depending on what works for the person using it.

Because it can be used with far less agility and strength than the standard PlayStation controller, the Access could also be a game changer for an emerging population: aging gamers suffering from arthritis and other limiting ailments.

“The last time I checked, the average age of gamers was in their forties,” Daniel said. “And I have every expectation, speaking for myself, that they’ll want to continue to game, as I’ll want to continue to game, because it’s entertainment for us.”

After his accident, Lane stopped gaming for seven years. For someone who began playing video games as a young child on the Magnavox Odyssey — released in 1972 — “it was a void” in his life, he said.

Starting again, even with the limitations of a standard game controller, felt like being reunited with a “long lost friend.”

“Just the social impact of gaming really changed my life. It gave me a brighter disposition,” Lane said. He noted the social isolation that often results when people who were once able-bodied become disabled.

“Everything changes,” he said. “And the more you take away from us, the more isolated we become. Having gaming and having an opportunity to game at a very high level, to be able to do it again, it is like a reunion, (like losing) a close companion and being able to reunite with that person again.”

Parkland school shooting survivor develops Joy, an app built on AI that helps people heal
https://www.pilotonline.com/2023/09/19/parkland-school-shooting-survivor-develops-joy-an-app-built-on-ai-that-helps-people-heal/
Wed, 20 Sep 2023

Kai Koerber was a junior at Marjory Stoneman Douglas High School when a gunman murdered 14 students and three staff members there on Valentine’s Day in 2018. Seeing his peers — and himself — struggle with returning to normal, he wanted to do something to help people manage their emotions on their own terms.

While some of his classmates at the Parkland, Florida, school have worked on advocating for gun control, entered politics or simply taken a step back to heal and focus on their studies, Koerber’s background in technology — he’d originally wanted to be a rocket scientist — led him in a different direction: to build a smartphone app.

The result was Joy: AI Wellness Platform, which uses artificial intelligence to suggest bite-sized mindfulness activities for people based on how they are feeling. The algorithm Koerber’s team built is designed to recognize how a person feels from the sound of their voice — regardless of the words or language they speak.

“In the immediate aftermath of the tragedy, the first thing that came to mind after we’ve experienced this horrible, traumatic event — how are we going to personally recover?” he said. “It’s great to say OK, we’re going to build a better legal infrastructure to prevent gun sales, increased background checks, all the legislative things. But people really weren’t thinking about … the mental health side of things.”

Like many of his peers, Koerber said he suffered from post-traumatic stress disorder for a “very long time” and only recently has it gotten a little better.

“So when I came to Cal, I was like, let me just start a research team that builds some groundbreaking AI and see if that’s possible,” said the 23-year-old, who graduated from the University of California at Berkeley earlier this year. “The idea was to provide a platform to people who were struggling with, let’s say sadness, grief, anger … to be able to get a mindfulness practice or wellness practice on the go that meets our emotional needs on the go.”

He said it was important to offer activities that can be done quickly, sometimes lasting just a few seconds, wherever the user might be. It wasn’t going to be your parents’ mindfulness practice.

“The notion of mindfulness being a solo activity or something that’s confined to sitting in your room breathing is something that we’re very much trying to dispel,” Koerber said.

Mohammed Zareef-Mustafa, a former classmate of Koerber’s who’s been using the app for a few months, said the voice-emotion recognition part is “different than anything I’ve ever seen before.”

“I use the app about three times a week, because the practices are short and easy to get into. It really helps me quickly de-stress before I have to do things like job interviews,” he said.

To use Joy, you simply speak into the app. The AI is supposed to recognize how you are feeling from your voice, then suggest short activities.

It doesn’t always get your mood right, so it’s possible to manually pick your disposition. Let’s say you are feeling “neutral” at the moment. The app suggests several activities, such as a 15-second exercise called “mindful consumption” that encourages you to “think about all the lives and beings involved in producing what you eat or use that day.”

Another activity helps you practice making an effective apology. Yet another has you write a letter to your future self, with a pen and paper — remember those? Feeling sad? A suggestion pops up asking you to track how many times you’ve laughed over a seven-day period and tally it up at the end of the week to see what moments gave you a sense of joy, purpose or satisfaction.
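For illustration only, here is a minimal Python sketch of the flow the app describes — infer a mood from a voice sample (or let the user pick one manually) and map it to a short practice. The mood labels, the activity list and the detect_mood stub are hypothetical stand-ins, not Joy’s actual code or API.

import random

# Hypothetical mood-to-activity table in the spirit of the examples above;
# Joy's real catalog, mood labels and voice-emotion model are not public.
ACTIVITIES = {
    "neutral": ["mindful consumption: consider who produced what you use today"],
    "sad": ["track how many times you laugh over the next seven days"],
    "angry": ["practice drafting an effective apology"],
}

def detect_mood(voice_sample):
    """Stand-in for the app's voice-emotion model, which is not public."""
    raise NotImplementedError

def suggest_activity(voice_sample, manual_mood=None):
    """Suggest a short practice; the user can override the inferred mood."""
    try:
        mood = manual_mood or detect_mood(voice_sample)
    except NotImplementedError:
        mood = "neutral"  # fall back when inference is unavailable
    return random.choice(ACTIVITIES.get(mood, ACTIVITIES["neutral"]))

print(suggest_activity(b"", manual_mood="sad"))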

The iPhone app is available for an $8 monthly subscription, with a discount if you subscribe for a whole year. It’s a work in progress, and as it goes with AI, the more people use it, the more accurate it becomes.

“Kai is a leader of this next generation who are thinking intentionally and with focus about how to use technology to meet the mental, physical, and climate crises of our times,” said Dacher Keltner, a professor at UC Berkeley and Koerber’s faculty advisor on the project. “It comes out of his life experience, and, unlike past technologists, he seems to feel this has to be what technology does, make the world healthier.”

A plethora of wellness apps on the market claim to help people with mental health issues, but it’s not always clear whether they work, said Colin Walsh, a professor of biomedical informatics at Vanderbilt University who has studied the use of AI in suicide prevention. According to Walsh, it is feasible to take someone’s voice and glean some aspects of their emotional state.

“The challenge is if you as a user feel like it’s not really representing what you think your current state is like, that’s an issue,” he said. “There should be some mechanism by which that feedback can go back.”

The stakes also matter. Facebook, for instance, has faced some criticism in the past for its suicide prevention tool, which used AI (as well as humans) to flag users who may be contemplating suicide, and — in some serious cases — contact law enforcement to check on the person. But if the stakes are lower, Walsh said, if the technology is simply directing someone to spend some time outside, it’s unlikely to cause harm.

“The driver is there’s a huge demand there, or at least the perception of a huge demand there,” Walsh said of the explosion of wellness and mental health apps in the past few years. “Despite the best of intentions with our current system — and it does a lot of good work — obviously, there’s still gaps. So I think people see technology as a tool to try to bridge that.”

Koerber said people tend to forget, after mass shootings, that survivors don’t just “bounce back right away” from the trauma they experienced. It takes years to recover.

“This is something that people carry with them, in some way, shape or form, for the rest of their lives,” he said.

His approach has also been slower and more deliberate than that of tech entrepreneurs of the past.

“I guess young Mark Zuckerberg was very ‘move fast and break things,’” he said. “And for me, I’m all about building quality products that, you know, serve social good in the end.”
