
Lawyers for Facebook, Twitter and Google balked Wednesday when asked by a senator if in the future their companies would consider notifying users who had seen advertising posted by fake accounts tied to foreign governments, primarily Russian operatives.

The idea, posed by Sen. Jack Reed, D-Rhode Island, came after nearly two hours of questioning by members of the Senate Select Intelligence Committee, who heard testimony on Russian attempts to influence the 2016 presidential election. Similar testimony was given a day earlier to a Senate Judiciary subcommittee.

“When you discover a deceptive foreign government presentation on your platform, my presumption is you said you’ll stop it and take it down. Do you feel an obligation in turn to notify those people [who] have accessed that?” Reed asked.

Facebook Vice President and General Counsel Colin Stretch said the company’s approach after identifying fake accounts is to first stop them and investigate their origins, then alert law enforcement and lawmakers. But Stretch indicated he doesn’t foresee the company notifying users.

“The question of reaching out to individuals who may have seen it is a much more difficult and complex one, but we believe our commitment to transparency on this issue generally should address that,” Stretch said.

The complexities go beyond just identifying who is targeted by individual ads, said Karen North, director of the USC Annenberg School’s Digital Social Media program, in a phone call with CBS News.

“You don’t know whether people paid attention to those ads,” North said. “And the idea that you would serve up spam to correct something that someone might not have paid attention to would hurt user experience.”

Congressional investigators released a selection of social media ads produced by Russian operatives to influence opinion in the U.S.

Google Senior Vice President and General Counsel Kent Walker echoed Stretch’s response, saying the company would have a hard time identifying precisely who saw individual ads because users are not required to be signed in when viewing content on Google platforms. 

It wasn’t clear if Twitter would have difficulty identifying the users who saw particular ads, but Sean Edgett, the company’s acting general counsel, said the platform’s users often police false content themselves.

“We see, as an open platform, active dialogue around a lot of this false information, fake information, right away. So when you see the tweets, you’re also seeing a number of replies to it showing people where to go, where other information is that’s accurate,” Edgett said. “But we will definitely take that idea back to explore how we could implement a process like that.” 

It was an answer that didn’t quite satisfy Gary Wilcox, a University of Texas at Austin professor who researches social media, advertising and branding.

“I think that’s kind of a cop-out,” Wilcox said during a phone call with CBS News. “That does invoke the interactive nature of the technology, but that’s kind of an easy way out.”

Both Wilcox and North pointed out that while it’s illegal to advertise false information in “traditional media” — newspapers, radio, television — doing so on social media is not against the law.

“This all comes back to the fact that we haven’t extended regulations from traditional to digital media,” North said.

But enforcing those regulations in the digital realm could be complicated, Wilcox said, noting that companies that falsely advertise on television can be forced to notify the public — a remedy that would be difficult to enforce when the misleading ad is placed by a troll farm on a different continent. And placing that responsibility on Facebook or Twitter might be unfair, he said.

“Should the medium do that? I don’t know, that’s taking a big leap,” Wilcox said.

© 2017 CBS Interactive Inc. All Rights Reserved.