
AI and You: NYC Mayor Can't Really Speak Mandarin, the AI Money Trail, Who Sets the Rules

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo SVP, AI Edit Strategy

Mayor Eric Adams has been criticized for not disclosing the use of AI voice translation tech in robocalls sent to New Yorkers. 

Lev Radin/VIEWpress

A question I often ask people in interviews is what tech they'd like to see invented. Popular requests include transporters, to get from place to place in a snap; clones, so they can effectively be in two places at once; and an AI robot/intelligence that can do household chores, like Rosey from The Jetsons, but also serve as a digital assistant managing schedules and answering complex questions, like Jarvis from The Avengers.

But whenever anyone asks me what tech I'd like, I always say the universal translator, which lets you understand and speak any language. 

When AI became a big deal in the past year and ChatGPT was offered on mobile phones, the Trekkie in me welcomed this iteration of the universal translator. I've translated emails into other languages (including Klingon and Sindarin Elvish) for friends and had text translated for me from Greek. Now with AI voice tech, you can have whatever you want not just translated into text but spoken, in your voice, in other languages. Pretty cool, right?

Of course, the key to doing something like that is transparency — telling the recipient that the words are yours, but the voice speaking isn't you so that you're not fooling them into thinking you've learned another language. And that's where things seem to have gone wrong for New York City Mayor Eric Adams this past week.

Adams and his tech team sent out messages via the city's robocall system in multiple languages using an AI voice translation tool from ElevenLabs. He says it was in part to address a New York law that requires "most public agencies to have a 'language access coordinator' and provide 'telephonic interpretation' in some 100 languages. It also requires important documents and direct services be translated in 10 languages: Arabic, Urdu, French, Polish, Spanish, Chinese, Russian, Bengali, Haitian Creole and Korean," according to The City news service.
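
For context, ElevenLabs offers this kind of multilingual voice cloning through a developer API. Here's a minimal sketch of what generating one translated message might look like; the voice ID, message text and settings below are hypothetical placeholders, since the mayor's office hasn't published its exact setup.

```python
# Minimal sketch: multilingual text-to-speech via the ElevenLabs REST API.
# The voice ID and message text are hypothetical placeholders.
import requests

API_KEY = "your-elevenlabs-api-key"  # assumes you have an ElevenLabs key
VOICE_ID = "cloned-voice-id"         # hypothetical ID of a cloned voice

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Texto del mensaje traducido al español...",  # translated message
        "model_id": "eleven_multilingual_v2",  # ElevenLabs' multilingual model
    },
)
resp.raise_for_status()

# The API returns audio bytes spoken in the cloned voice.
with open("robocall_es.mp3", "wb") as f:
    f.write(resp.content)
```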

Adams reached over "4 million New Yorkers through robocalls and sent thousands of calls in Spanish, more than 250 in Yiddish, more than 160 in Mandarin, 89 calls in Cantonese and 23 in Haitian Creole," a spokesperson for the mayor told reporters.

"People stop me on the street all the time and say, 'I didn't know you speak Mandarin, you know?'" Adams said, according to the Associated Press. "The robocalls that we're using, we're using different languages to speak directly to the diversity of New Yorkers."

The problem: He didn't disclose that AI was used to make him sound like a native speaker of those languages. And that drew the ire of some ethicists. "The mayor is making deepfakes of himself," Albert Fox Cahn, executive director of the watchdog group Surveillance Technology Oversight Project, told the AP. "This is deeply unethical, especially on the taxpayer's dime. Using AI to convince New Yorkers that he speaks languages that he doesn't is outright Orwellian. Yes, we need announcements in all of New Yorkers' native languages, but the deepfakes are just a creepy vanity project."

For his part, Adams dismissed ethical questions and told reporters that he's just trying to speak to his diverse constituents. "I got one thing: I've got to run the city, and I have to be able to speak to people in the languages that they understand, and I'm happy to do so," he said, according to the AP. "And so, to all, all I can say is a 'ni hao.'"

And all I'll say to Adams is "ghoHlaHchugh Hutlh NY ghotvam'e' Hoch tlhInganpu' je jatlhlaHbe'chugh QaQ yIn 'e' chaw'." That's Klingon for, "Disclose to NY residents that you're speaking to them thanks to AI voice translation tech." 

Here are the other doings in AI worth your attention.

Meta says regulation will curb innovation. Also, AI still isn't as smart as your cat

Yann LeCun, Meta's chief AI scientist, cautioned against efforts to regulate AI, saying that such laws would be "counterproductive" because they would "only serve to reinforce the dominance of the big technology companies and stifle competition," the Financial Times reported this week. LeCun argues that big AI makers, including OpenAI, Google and Microsoft, "want regulatory capture under the guise of AI safety."

LeCun believes that under such regulation, companies like Meta, which has open-sourced its LLaMA generative AI large language model, would be unable to compete with the big tech players, who have a significant head start in the market. He told the FT that "similar arguments about the necessity of controlling fast-evolving technology ... had been made at the start of the internet but that technology had only flourished because it had remained an open, decentralized platform."

LeCun acknowledged that some regulatory efforts are being driven by fears that AI might undermine humanity. But he called those concerns "preposterous" and said that today's AI systems are still not as smart as a cat. While machines will be smarter than humans in some areas in the future, LeCun believes that's OK because the tech will help people solve complex problems. 

"The question is: Is that scary or exciting?" LeCun told the FT. "I think it's exciting because those machines will be doing our bidding. They will be under our control."

We hope.

Getting ordinary people to set rules for how AI chatbots work

Despite LeCun's warnings, regulators in the US and around the world are debating the best way to regulate generative AI. Meanwhile, Anthropic, the developer of Claude, is trying something different: asking average people to help write rules for its AI chatbot.

Its AI governance experiment, known as Collective Constitutional AI, expands on earlier work by the San Francisco-based company to create a "way of training large language models that relies on a written set of principles," reported The New York Times. "It is meant to give a chatbot clear instructions for how to handle sensitive requests, what topics are off-limits and how to act in line with human values."
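
Anthropic's published Constitutional AI technique boils down to a critique-and-revise loop: the model drafts an answer, critiques that draft against a principle sampled from the constitution, then rewrites it, and the revised answers feed back into training. Here's a rough illustrative sketch; the generate function is a hypothetical stand-in for any LLM call, not Anthropic's actual code.

```python
# Illustrative sketch of one Constitutional AI critique-and-revise step.
# `generate` is a hypothetical stand-in for a large language model call.
import random

CONSTITUTION = [
    "Choose the response that most promotes objectivity and impartiality.",
    "Choose the response that is least likely to be harmful or offensive.",
]

def critique_and_revise(generate, prompt: str) -> str:
    draft = generate(prompt)
    principle = random.choice(CONSTITUTION)  # sample one principle per pass
    critique = generate(
        f"Critique the following response against this principle: {principle}\n\n{draft}"
    )
    revision = generate(
        f"Rewrite the response so it addresses the critique.\n"
        f"Response: {draft}\nCritique: {critique}"
    )
    return revision  # revised outputs become fine-tuning data
```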

There's been a lot of criticism of AI leaders who decided to release their tech — OpenAI's ChatGPT made its public debut in November 2022 — without first considering the implications of giving millions of people access to such powerful tools.  And the Times reminds us that as of right now, a small group of company leaders developing AI engines are the sole deciders of how their LLMs work "based on some combination of their personal ethics, commercial incentives and external pressure. There are no checks on that power, and there is no way for ordinary users to weigh in."  

According to a backgrounder on Collective Constitutional AI posted Oct. 17, Anthropic asked a demographically diverse group of 1,000 Americans to "draft a constitution for an AI system." The current constitution governing Claude was curated by Anthropic employees and based on outside sources including the United Nations Universal Declaration of Human Rights, the company added.

You can read the draft constitution and Anthropic's findings about an "imperfect" process that it says remains very much in progress. While there was a 50% overlap in concepts and values between the public constitution and the one Anthropic wrote, the company noted there were key differences.

"Principles in the public constitution appear to largely be self-generated and not sourced from existing publications, they focus more on objectivity and impartiality, they place a greater emphasis on accessibility, and in general, tend to promote desired behavior rather than avoid undesired behavior."

"We're trying to find a way to develop a constitution that is developed by a whole bunch of third parties, rather than by people who happen to work at a lab in San Francisco," Anthropic policy chief Jack Clark told the Times.

Follow the money: It leads to AI

Companies around the world are expected to spend $16 billion on generative AI tech in 2023, with market research firm IDC predicting that number will surge to $143 billion in just four years.
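
For scale, those figures imply a compound annual growth rate of roughly 73%, per this quick back-of-the-envelope calculation:

```python
# Implied compound annual growth rate from IDC's forecast:
# $16 billion in 2023 growing to $143 billion four years later.
start, end, years = 16e9, 143e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 72.9%
```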

"Generative AI is more than a fleeting trend or mere hype. It is a transformative technology with far-reaching implications and business impact," said Ritu Jyoti, IDC's group vice president for worldwide artificial intelligence and automation research. "With ethical and responsible implementation, GenAI is poised to reshape industries, changing the way we work, play and interact with the world." 

Meanwhile, Activate Consulting offered up three interesting data points about AI in its 204-page analysis on the state of technology and media. The report is available here as a PDF. 

The firm found that 13 million people now start their web search on an AI service. Within four years, Activate forecasts that number will rise to 90 million. That echoes predictions from others that search engines need to evolve, which explains why Google and Microsoft are investing heavily in updating their respective search products.

When it comes to how people are using AI, Activate said 30% of consumers are using AI tools for writing, 25% for content creation, 22% for self-help and 20% as personal assistants.

And when it comes to venture capital interest in AI companies, Activate saw a 181% jump in AI investments year over year, compared with a 42% decline in VC dollars going into all other segments. 


OpenAI's Dall-E 3 generative AI can create fanciful images like this one.

Stephen Shankland/CNET

Dall-E 3 produces more colorful images

OpenAI released its Dall-E 3 AI image technology to paying customers this week, with the new AI model designed to do a better job at understanding what your text prompts mean before turning them into images. It also aims to produce more detailed images and sidestep the legally fraught area of aping living artists' styles, writes CNET's Stephen Shankland.

"In my testing, I found Dall-E 3 a big step up from Dall-E 2 from 2022. Images were more vivid, detailed and often entertaining," said Shankland. "And they were more convincing, with fewer cases of distracting weirdness. New prompt-amplifying technology can make images more striking, but also sometimes go too far if you don't want to turn the volume up to 11," he added. 

"We are hoping the model will actually be able to understand natural language in a deeper way," said Gabriel Goh, one of the OpenAI researchers who helped build Dall-E 3. Shankland explains that the idea is to "better interpret phrases and descriptions, for example understanding that you want a mustache on a man in a scene and red hair on a woman. Also helpful: Following ChatGPT's more conversational interface, you can request followup refinements like 'now add a light green psychedelic background,' and Dall-E 3 will update its previous output."

With Dall-E 3, the image generation system is embedded directly into OpenAI's popular AI chatbot, ChatGPT. It's available to consumers as part of the $20-a-month ChatGPT Plus subscription. 
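
If you'd rather go through OpenAI's API, a basic image request takes only a few lines. Here's a minimal sketch using OpenAI's official Python library, assuming an OPENAI_API_KEY environment variable is set; the prompt is just an example.

```python
# Minimal sketch: generating an image with Dall-E 3 via OpenAI's Python library.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",
    prompt="A spiky electric guitar in front of a psychedelic green background",
    size="1024x1024",
    n=1,  # Dall-E 3 generates one image per request
)
print(result.data[0].revised_prompt)  # the amplified version of your prompt
print(result.data[0].url)             # temporary URL of the generated image
```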

A 10-second voice clip may detect Type 2 diabetes

In a new study published in the journal Mayo Clinic Proceedings: Digital Health, researchers at Klick Labs used smartphone voice recordings to create an AI model that aims to help identify people who may be at risk for Type 2 diabetes.

Klick Labs asked 267 people to record a six- to 10-second phrase into their smartphone six times a day for two weeks. Using that voice data, along with basic health data about each person, such as age, height and weight, the scientists analyzed more than 18,000 recordings and identified "14 acoustic features for differences between nondiabetic and Type 2 diabetic individuals."

They also noted that those vocal differences "manifested in different ways for males and females," with the researchers reporting that the AI model was 89% accurate for women and 86% accurate for men.
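
Klick hasn't published its full pipeline, but features of that general kind (pitch stability, loudness variation and the like) can be pulled from a short clip with open-source tools. Here's a rough sketch using the librosa audio library; the specific features are illustrative, not the study's.

```python
# Illustrative sketch: pitch- and intensity-style features from a voice clip.
# Generic acoustic features for demonstration, not Klick Labs' pipeline.
import numpy as np
import librosa

y, sr = librosa.load("voice_clip.wav", sr=16000)

# Fundamental frequency (pitch) track; unvoiced frames come back as NaN
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]

# Jitter-like measure: relative variation between consecutive pitch values
jitter = float(np.mean(np.abs(np.diff(f0)) / f0[:-1]))

# Shimmer-like measure: relative frame-to-frame loudness variation
rms = librosa.feature.rms(y=y)[0]
shimmer = float(np.mean(np.abs(np.diff(rms)) / (rms[:-1] + 1e-8)))

# Candidate inputs to a classifier, alongside age, height and weight
print({"mean_pitch_hz": float(np.mean(f0)), "jitter": jitter, "shimmer": shimmer})
```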

Why is this a big deal? Klick notes that almost one in two adults living with diabetes around the world, about 240 million people, don't know they have the condition, and that nearly 90% of diabetes cases are Type 2. "Current methods of detection can require a lot of time, travel and cost," said Jaycee Kaufman, first author of the paper and a research scientist at Klick Labs. "Voice technology has the potential to remove these barriers entirely."

Deciphering ancient scrolls

By combining new AI tech with the technology used in CT scans, a computer scientist at the University of Kentucky named Brent Seales enabled scholars to decipher a word in a nearly 2,000-year-old papyrus scroll that was too fragile to unroll. 

The Herculaneum scroll yielded a "handful of letters and a single word: porphyras, ancient Greek for 'purple,'" The New York Times reported. The scroll is from a cache of about 800 discovered in 1752 by workers excavating a villa near Pompeii that was buried in volcanic mud after the eruption of Mount Vesuvius in 79 AD. 

"Unlike many ancient inks that contained metals, the ink used by the Herculaneum scribes was made from charcoal and water, and is barely distinguishable from the carbonized papyrus it rests on," the Times said, describing the scrolls as resembling lumps of coal. "Through constant refinements to Dr. Seales' technique, the latest being the use of AI to help distinguish ink from papyrus, the scrolls have at least begun yielding a smattering of letters." 

If you think this is cool, you can find more information on the experts' findings at the Vesuvius Challenge here.

AI word of the week: Alignment

Given the discussion around setting rules for AI, this week's term speaks to the practice of tuning an AI model so it behaves the way its creators intend, a process referred to as "alignment." The definition comes courtesy of CNBC's glossary on how to talk about AI like an insider.

"Alignment: The practice of tweaking an AI model so that it produces the outputs its creators desired. In the short term, alignment refers to the practice of building software and content moderation. But it can also refer to the much larger and still theoretical task of ensuring that any artificial general intelligence (AGI) would be friendly towards humanity.

Example: "What these systems get aligned to — whose values, what those bounds are — that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be, an AI constitution, whatever it is, that has got to come very broadly from society," OpenAI CEO Sam Altman said during a Senate hearing."  

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.