Mouse has 2 biological dads; GPT-4 saved this dog; Bing writes a winning caption for a New Yorker cartoon contest; AI creates a 3D scene from 1 pic
Should we pause AI research: a summary of what the 4 main AI tribes think.
Hi,
This is Thomas, Cofounder and CEO of digital agency KRDS (more about me at the end, see our latest game showreel here).
You're receiving Future Weekly, my personal selection of news about some of the most exciting (and sometimes scary) developments in technology 🤖 summarized as bullet points to help you save time and anticipate the future 🔮.
First, you'll find small bites on many different stories, and then, further down, these longer summaries:
How Will AI Create or Destroy Jobs?
Should we pause AI research: a summary of what the 4 main AI tribes think.
Small Bites
AI is already taking video game illustrators’ jobs in China (source)
“AI is developing at a speed way beyond our imagination. Two people could potentially do the work that used to be done by 10.”
AI-generated art was so good that some illustrators talked about giving up drawing altogether. “Our way of making a living is suddenly destroyed,” said a game artist in Guangdong.
36% of researchers believe that AI could cause a "nuclear-level catastrophe."
According to a survey conducted by Stanford University's Institute for Human-Centered AI
57% of researchers, for example, think that "recent research progress" is paving the way for artificial general intelligence.
AI Could Enable Humans to Work 4 Days a Week, Says Nobel Prize-Winning Economist (Time)
“We could increase our well-being generally from work and we could take off more leisure."
says Christopher Pissarides, who specializes in the impact of automation on work
Image generation: MidJourney, a self-funded 11-person team, does better than Adobe; see these examples from the same prompt
one reason seems to be that Adobe Firefly is trained only on Creative Commons images and its own stock photos, so it's not good at generating copyrighted characters
but Adobe Firefly does worse even on examples with no copyrighted characters
Samsung Workers Accidentally Leaked Trade Secrets via ChatGPT (source)
ChatGPT’s data policy says it uses data to train its models unless you request to opt out. In ChatGPT’s usage guide, it explicitly warns users not to share sensitive information in conversations.
One employee pasted confidential source code into the chat to check for errors.
Another shared a recording of a meeting to convert into notes for a presentation.
Demo of a prototype that tells you in real time what to say during a job interview
More cool pics about the pope generated by AI
Mice With Two Dads Were Born From Eggs Made From Male Skin Cells (source)
Thanks to iPSC (induced pluripotent stem cell) technology, scientists have been able to bypass nature to engineer functional eggs, reconstruct artificial ovaries, and give rise to healthy mice from two mothers. Yet no one had been able to crack the recipe for healthy offspring born from two dads, until now.
In a study published in Nature, researchers described how they scraped skin cells from the tails of male mice and used them to create functional egg cells. When fertilized with sperm and transplanted into a surrogate, the embryos gave rise to healthy pups, which grew up and had babies of their own.
but… the success rate in mice was very low, just a shade over 1%, for now…
How Mars Rock Samples Would Make Their Way to Earth (more info here on NASA's website)
Vinod Khosla (cofounder of Sun Microsystems in 1982 and cleantech fund Khosla Ventures in 2004) on AI (source)
"In 25 years, 80% of all jobs will be capable of being done by an AI." "AI will ‘free humanity from the need to work’"
Wow: a computer model that can create realistic 3D scenes from just one photo, and show the same scene from different angles or even make a 3D video (see examples in the video here)
GPT-4 saved this dog's life. Read the story on Twitter
And at the same time : When Dean Buonomano, a neuroscientist at U.C.L.A., asked GPT-4 “What is the third word of this sentence?,” the answer was “third.”
These examples may seem trivial, but the cognitive scientist Gary Marcus wrote on Twitter that “I cannot imagine how we are supposed to achieve ethical and safety ‘alignment’ with a system that cannot understand the word ‘third’ even [with] billions of training examples.”
Bing AI (powered in part by GPT-4) wrote one of the 3 winning captions in a New Yorker cartoon caption contest (source)
Bing AI wrote it in response to a prompt describing the image and asking it to come up with a funny, abstract caption
the caption: “They’re not looking for the exit, they’re looking for meaning”
Bill Gates after trying a self-driving car in London: "We’ve made tremendous progress on autonomous vehicles, or AVs, in recent years, and I believe we’ll reach a tipping point within the next decade"
"The car drove us around downtown London, which is one of the most challenging driving environments imaginable, and it was a bit surreal to be in the car as it dodged all the traffic. (Since the car is still in development, we had a safety driver in the car just in case, and she assumed control several times.)"
"a vehicle made by the British company Wayve, which has a fairly novel approach. While a lot of AVs can only navigate on streets that have been loaded into their system, the Wayve vehicle operates more like a person. It can drive anywhere a human can drive."
Read his article, and see the 2-min video of the ride on YouTube
Tech joke: “How many people work at Google?” “Oh, about half.”
Finding this interesting? Share it on Whatsapp, just click here ❤️
If yes, feel free to take 3 seconds to forward this newsletter to one person, or share it on WhatsApp by clicking here; I'd be immensely grateful 🙂
If this email was forwarded to you, click here to subscribe and make sure you receive future editions in your mailbox (many CEOs and startup founders are subscribers)
More to chew!
How Will AI Create or Destroy Jobs? It depends mostly on demand saturation (source)
Automation arrives and increases productivity by eliminating some human tasks (=lower labor costs).
This reduces the cost of the product.
If there is competition, this increases quality and reduces prices.
Demand soars.
This increase in demand creates so much more work that companies need to hire people to do the tasks that are not yet automated.
But at some point, this improvement in quality and price saturates demand. People don’t want more products. They stop buying much more.
But productivity keeps improving. More and more tasks are automated, but the volume of sales doesn’t grow as much.
Employment drops.
This explains why so many agricultural jobs disappeared (and the horses with them): people prioritize food. If you increase productivity, you'll produce more food, and people will buy a bit more if they simply couldn't afford it before. But once their hunger is covered, they don't want more quantity; they want more quality. That can only create so much extra work. You won't buy 30 avocados just because they're 30 times cheaper.
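The mechanism above can be sketched with a toy simulation (all numbers are my own illustrative assumptions, not from the article): automation cuts both the price and the labor needed per unit each year, demand grows as the product gets cheaper but only up to a saturation cap, and employment is simply units sold times labor per unit.

```python
# Toy model of the demand-saturation argument.
# All numbers are illustrative assumptions, not from the article.

def employment_over_time(years=10, saturation=300):
    """Each year automation cuts both the price and the labor needed per unit.
    Demand grows as the product gets cheaper, but only up to a saturation cap.
    Employment = units sold * labor per unit."""
    price, labor_per_unit = 10.0, 1.0
    jobs = []
    for _ in range(years):
        demand = min(saturation, 1000 / price)  # cheaper -> more sold, until saturated
        jobs.append(demand * labor_per_unit)
        price *= 0.7            # competition passes productivity gains into prices
        labor_per_unit *= 0.85  # automation keeps eliminating human tasks
    return jobs

jobs = employment_over_time()
# While demand is unsaturated it grows faster than labor per unit shrinks,
# so employment rises; once demand caps out, employment falls.
```

With these particular numbers, employment peaks a few years in and then declines, which is the whole story in miniature: the same productivity gains that first create jobs eventually destroy them once demand saturates.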
Should we pause AI research: see what these 4 AI tribes think
An open letter was published asking "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
Stating also: "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4"
the 4 tribes
1. Those who signed the letter, or who didn't but support similar views about short-, medium-, and long-term risks
Among the signatories: Yoshua Bengio, one of the 3 deep learning godfathers, the only one who’s stayed in a purely academic role:
“There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values. The short and medium-term risks –manipulation of public opinion for political purposes, especially through disinformation– are easy to predict, unlike the longer-term risks –AI systems that are harmful despite the programmers’ objectives,– and I think it is important to study both.”
Among those who didn’t sign but share similar views
Geoffrey Hinton, the oldest of the 3 deep learning godfathers, in a recent CBS interview:
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI," Hinton said. "And now I think it may be 20 years or less."
As for the odds of AI trying to wipe out humanity? "It's not inconceivable, that's all I'll say," Hinton said.
The bigger issue, he said, is that people need to learn to manage a technology that could give a handful of companies or governments an incredible amount of power.
"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said. "People should be thinking about those issues."
Demis Hassabis, CEO of DeepMind, the other world-class AI lab, in Time magazine in Jan 2023: DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution
2. Those who refused to sign because it didn’t emphasize enough short-term risks (fairness, algorithmic bias, accountability, privacy, transparency, inequality, cost to the environment, disinformation/accuracy, cybercrime) and hyped long-term ones way too much.
In particular AI researchers and cognitive scientists Emily M. Bender (read her reaction), Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell
Other short/medium-term risks mentioned in this NYT piece, In A.I. Race, Microsoft and Google Choose Speed Over Caution:
Chatbots could hurt users who become emotionally attached to them.
People could believe chatbots, which often use an “I” and emojis, are human.
They could enable “tech-facilitated violence” through mass harassment online.
And of note: in 2020, Google blocked its top ethical A.I. researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that so-called large language models used in the new A.I. systems, which are trained to recognize patterns from vast amounts of data, could spew abusive or discriminatory language.
3. Those who refused to sign because they believe we’ll solve AI’s issues as we keep developing it, and that AI risks are vastly overhyped at this point, while there are immediate benefits to unlock
4. Last, the most radical tribe, led by Eliezer Yudkowsky, who refused to sign because they think the letter didn’t stress existential risk enough
Eliezer is the pioneer of “AI alignment research”
Eliezer Yudkowsky: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”" (check the op-ed in Time magazine: Pausing AI Developments Isn't Enough. We Need to Shut it All Down)
The New York Times notes: Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence. He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI. They don’t see this as hypocrisy: Many of them believe that because they understand the dangers clearer than anyone else, they are in the best position to build this technology.
On AI existential risk: In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.
What about OpenAI’s position itself? Quite ambivalent:
A recent OpenAI statement on artificial general intelligence says that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."
Sam Altman, CEO of OpenAI:
In a recent interview with ABC News:
Altman is “a bit scared”; he thinks society needs to be given time to adapt; they will adjust the technology as negative things occur; and we’ve got to learn as much as we can whilst the “stakes are still low”
"AI will destroy millions of jobs, but humanity will create better ones"
In a 2019 NYT interview: while he told the newspaper he thought AGI could bring a huge amount of wealth to people, he also admitted that it may end up ushering in the apocalypse.
In a more recent NYT interview: "The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term," he said, adding that we have enough time to get ahead of these problems.
His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people through a universal basic income. (...) he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
Sam Altman forecasts that within a few years, there will be a wide range of different AI models propagating and leapfrogging each other all around the world, each with its own smarts and capabilities, and each trained to fit a different moral code and viewpoint by companies racing to get product out of the door. "The only way I know how to solve a problem like this is iterating our way through it, learning early and limiting the number of 'one-shot-to-get-it-right scenarios' that we have," said Altman.
Sam Altman, the C.E.O. of OpenAI, recently told ABC News, “I’m particularly worried that these models could be used for large-scale disinformation.” And, he noted, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.” He added that “there will be other people who don’t put some of the safety limits that we put on,” and that society “has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
And last: "Nobody is launching runs bigger than GPT-4 for 6-9 months anyway. Why? Because it needs new hardware (H100/TPU-v5 clusters) anyway to get scale above that which are.. 6-9 months away after being installed, burned in, optimised etc" says Emad Mostaque, CEO at Stability AI, the open source generative AI company behind Stable Diffusion, a famous text-to-image AI model
Previous newsletters:
Superhuman: see how much work a professor did in 30 minutes with AI, + more gems about the future
Tesla does something Bill Gates said wasn’t possible, AI lets you talk to younger self
That's it for this week :)
If you made it all the way here, well, thanks a lot for reading this newsletter! A very simple way to encourage me to keep doing this is to take a few seconds to:
forward this to one curious friend, or share it on WhatsApp by clicking here
click on the little star next to this email in your mailbox
click on the heart at the bottom of that email
Thank you so much in advance! 🙏
Click here to subscribe to make sure you get future editions if this one was forwarded to you.
More about me
I cofounded KRDS right after college back in 2008 in Paris. We now also have offices in Singapore, HK, Shanghai, Dubai, and India, and we're one of the largest independent digital agencies in Asia. More here.
Watch our latest game showreel: At KRDS, we take pride in designing and developing games from scratch for brands and organizations, big and small! Gamification has always been part of our DNA, since our early days creating viral apps on Facebook back in Paris as the very first Facebook marketing partner outside of the USA!
I launched 2 sister agencies:
OhMyBot.net, dedicated to designing and building chatbots (watch the video case study for a chatbot campaign we ideated and developed for Clean & Clear: The Teen Skin Expert)
The WeChat Agency for the Chinese market (the Government of Singapore Investment Corporation is a client)
I also write op-eds and do podcasts at times. Here are my latest articles and podcasts
For the French speakers:
I’ve written more than 50 articles on the future of technology over the past years, all can be found listed here.
This newsletter has a French version with slightly different content: Parlons Futur
Have a great week ahead :)
Thomas