🚀 See what that AI did after reading fMRI scans; why humanoid robots are coming of age, & more
AI discovers new antibiotic & more
Hi,
This is Thomas, Cofounder and CEO of digital agency KRDS (more about me at the end).
You're receiving Future Weekly, my personal selection of news about some of the most exciting (and sometimes scary) developments in technology 🤖 summarized as bullet points to help you save time and anticipate the future 🔮.
First, you'll find small bites covering many different stories, and then, further down, these longer summaries:
See how long it took previous general-purpose technologies to change the economy
A few things to know about the chip that powers ChatGPT
Check that good news about our fight against climate change
Almost all the big names in AI say we should be worried about human extinction from AI. Should we?
Why are humanoid robots coming of age now?
How the best language-based AI tools compare with one another, by Wharton professor Ethan Mollick (ChatGPT/BingAI/Claude/Bard)
Find out about these unexpected similarities between how our brain and artificial neural nets work!
Small Bites
Great new demo: deform any object/person in an image instantly just using your mouse (source)
People fooled by ChatGPT: “One of my favorite examples here: people are walking into libraries asking to check out books that don’t exist because they’ve asked for a list of books.”
Meta’s new AI models can recognize and produce speech for more than 1,000 languages and understand 3,000 more! (MIT Tech Review)
Meta says it halves the error rate of OpenAI’s Whisper
They trained it on two new datasets: one containing audio recordings of the New Testament and its corresponding text, taken from the internet, in 1,107 languages, and another containing unlabeled New Testament audio recordings in 3,809 languages.
Meta open-sources multisensory AI model that combines 6 types of data (source)
You could describe a rainforest with text and it’d be able to visualise it, create the sound of rain, understand its depth, map thermal imaging, and appreciate motion readings.
In a blog post, Meta notes that other streams of sensory input could be added to future models, including “touch, speech, smell, and brain fMRI signals.”
China Overtakes Japan As The World’s Biggest Exporter Of Passenger Cars (source)
Europe is more economically exposed to China than America is. Some 8% of publicly listed European firms’ revenues are from China, compared with 4% for American ones (The Economist)
British Telecom to cut 55,000 jobs with up to 20% replaced by AI (BBC)
"Whenever you get new technologies you can get big changes," said CEO Philip Jansen.
He said "generative AI" tools such as ChatGPT, which can write essays, scripts, and poems and solve coding problems in a human-like way, "gives us confidence we can go even further".
Be polite with AI, just in case, you know... ;)
Really cool Coca-Cola video ad done with the help of Generative AI
Scientists use AI to discover new antibiotic to treat deadly superbug (The Guardian)
Researchers used an AI algorithm to screen thousands of candidate molecules in an attempt to predict new structural classes. As a result of the AI screening, researchers were able to identify a new antibacterial compound.
“Using AI, we can rapidly explore vast regions of chemical space, significantly increasing the chances of discovering fundamentally new antibacterial molecules,” one of the researchers said.
AI methods afford us the opportunity to vastly increase the rate at which we discover new antibiotics, and at a reduced cost.
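For intuition, here's a minimal sketch of the general virtual-screening recipe (my own illustration on synthetic data, not the study's actual pipeline): train a model on molecules with known lab results, then rank a large unscreened library by predicted activity.

```python
# Toy sketch of AI-guided antibiotic screening (NOT the study's actual
# pipeline): train a classifier on molecules with known antibacterial
# activity, then rank a large unscreened library by predicted activity.
# All data here is synthetic; real work would use molecular fingerprints
# (e.g. Morgan/ECFP bit vectors) computed from chemical structures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "fingerprints": 2,048-bit vectors standing in for molecules.
n_train, n_library, n_bits = 500, 10_000, 2048
X_train = rng.integers(0, 2, size=(n_train, n_bits))
y_train = rng.integers(0, 2, size=n_train)  # 1 = active in the lab assay
X_library = rng.integers(0, 2, size=(n_library, n_bits))

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score the whole library and surface the top candidates for lab testing.
scores = model.predict_proba(X_library)[:, 1]
top_hits = np.argsort(scores)[::-1][:50]
print("Top candidate indices:", top_hits[:10])
```

The point of the approach is the funnel: the model cheaply scores millions of structures so that only a handful of top-ranked candidates need expensive wet-lab validation.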
Finding this interesting? Feel free to take 3 seconds to forward this newsletter to one person, or share it on WhatsApp by clicking here. I'd be immensely grateful 🙂
If this email was forwarded to you, you can click here to subscribe and make sure you receive future editions in your mailbox (many CEOs and startup founders are subscribers)
More to chew on!
Even the most powerful new technologies took time to change the economy (The Economist)
James Watt patented his steam engine in 1769, but steam power did not overtake water as a source of industrial horsepower until the 1830s in Britain and 1860s in America.
In the case of electrification, the key technical advances had all been accomplished before 1880, but American productivity growth actually slowed from 1888 to 1907.
Nearly three decades after the first silicon integrated circuits, Robert Solow, a Nobel-prize-winning economist, was still observing that the computer age could be seen everywhere but in the productivity statistics. It was not until the mid-1990s that a computer-powered productivity boom eventually emerged in America.
The chip that powers ChatGPT
Nvidia’s A100 chip
26.7 cm long, 11.1 cm wide
$10,000 each
UBS analysts estimate an earlier version of ChatGPT required about 10,000 such chips (10,000 × $10,000 = $100 million; see the quick back-of-the-envelope after this list)
The A100 also has the distinction of being one of only a few chips subject to export controls for national-security reasons. (CNBC)
The AI Boom Runs on Chips, but It Can’t Get Enough: ‘It’s like toilet paper during the pandemic.’ Startups, investors scrounge for computational firepower. (WSJ)
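Here's the quick back-of-the-envelope behind that UBS figure (list-price math only; a real deployment also pays for servers, networking, power, and engineering):

```python
# List-price math behind the UBS estimate above; real deployments
# also pay for servers, networking, power, and engineering.
chips = 10_000           # A100s an earlier ChatGPT reportedly required
price_per_chip = 10_000  # approximate USD list price per A100
print(f"Hardware cost: ${chips * price_per_chip:,}")  # $100,000,000
```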
Good news: solar and battery manufacturing capacity is expanding so fast that it is now on track to meet the 2030 milestones set out in the IEA’s scenario for net-zero CO2 emissions by 2050. (Bloomberg)
Installed and announced manufacturing capacity, as a percentage of the 2030 levels needed in the IEA's net-zero scenario:
Solar photovoltaics: from 29% of the 2030 target in 2021 to 165% at the end of Q1 2023
Batteries: from 6% to 97% over the same period
Electrolyzers (which use electricity to split water into oxygen and hydrogen): from 4% to 57%
Heat pumps: from 25% to 42%
Wind turbines: from 24% to 29%
Planners, policymakers and project developers should also have some confidence that manufacturing can scale to meet net zero’s 2030 milestones.
How the best language-based AI tools compare (Wharton professor Ethan Mollick)
Your internet-connected AI is going to be Microsoft’s Bing in Creative Mode (the purple screen lets you know it is in Creative Mode), which is GPT-4, but free and connected to the internet.
It is also weird. It has a personality and some other constraints that might make it harder to work with (again, like some people you might know).
So you will probably want an offline, less opinionated AI to work with on longer projects or exchanges. There are 2 good options.
You can use GPT-4 (which you can get through ChatGPT Plus, for a fee), which is the most powerful model available and has a fairly calm, neutral personality.
Or else you can use Anthropic’s Claude, which is not quite as powerful, but has a longer memory and a remarkably pleasant personality (yes, this sounds weird, but trust me, you will know it when you see it).
Google’s Bard is very hit-or-miss, even after its updates, so I would skip it for now, though hopefully that will change.
Humanoid Robots Are Coming of Age (Wired)
There are already plenty of warehouse and manufacturing robots out there that use wheels rather than legs. And warehouses can be designed to make clever use of more conventional automation like conveyor belts.
But Melonee Wise, Agility’s CTO, says there are many situations where legs are far superior, especially at companies that cannot afford to entirely remake their operations around automation.
Why now?
More advanced computer vision, made possible through developments in machine learning over the past decade, has made it a lot easier for machines to navigate complex environments and do tasks like climbing stairs and grasping objects.
More power-dense batteries, produced as a result of electric vehicle development, have also made it possible to pack sufficient juice into a humanoid robot for it to move its legs quickly enough to balance dynamically—that is, to steady itself when it slips or misjudges a step, as humans can.
Humanoid robots can more easily navigate stairs, ramps, and unsteady ground; squeeze into tight spaces; and bend down or reach up as they work, Wise says. She’s a recent convert to team humanoid, and was until recently CEO of Fetch Robotics, which makes wheeled warehouse robots.
Brett Adcock, Figure’s CEO, reckons it should be possible to build humanoids at the same cost as making a car, provided there is enough demand to ramp up production.
Almost all the big names in AI say we should be worried about human extinction from AI. Should we?
The signed statement is short: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Among the signatories:
2 of the 3 “godfathers” of deep learning: Geoffrey Hinton, who left Google a few weeks back, and Yoshua Bengio
DeepMind cofounders, including its CEO Demis Hassabis
OpenAI CEO Sam Altman and its Chief Scientist Ilya Sutskever, the mastermind behind ChatGPT
More than 300 other researchers and public figures also signed.
Among those who did NOT sign:
Yann LeCun, the third “godfather” and director of AI research at Meta/Facebook, who couldn’t agree less with the statement:
I disagree. AI amplifies human intelligence, which is an intrinsically Good Thing, unlike nuclear weapons and deadly pathogens.
We don't even have a credible blueprint to come anywhere close to human-level AI. Once we do, we will come up with ways to make it safe.
Andrew Ng, Google Brain cofounder, who said in response:
When I think of existential risks to large parts of humanity: * The next pandemic * Climate change→massive depopulation * Another asteroid
AI will be a key part of our solution. So if you want humanity to survive & thrive the next 1000 years, let’s make AI go faster, not slower.
Juergen Schmidhuber, recognised by some as “the father of modern AI”, said earlier that his life’s work won't lead to dystopia.
So shall we worry? Hard to say. The best way I’ve found so far to approach the debate:
The anti-doom argument is:
“We will design/engineer AGI (Artificial General Intelligence) to be safe; we will test it, monitor it, install safeguards; we will have multiple AGIs and they will police each other, etc.”
“Believing doom = believing all of these will fail = conjunction of many special arguments.”
The doom argument is:
“You can't engineer safe AGI; it will outsmart you, trick you, evade your safeguards; if you have multiple AGIs they will collude against you, etc.”
“Believing non-doom = believing you will succeed at all of these = conjunction of many special arguments.”
Each side believes that the *other* side has a weird conjunction of many dubious arguments, so each side thinks that their position is the normal default thing and that the *other* side has made an extraordinary claim that requires extraordinary evidence.
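A toy numeric illustration of that symmetry (my own numbers, assuming independence between steps, which neither camp literally claims): multiplying many steps together is exactly why each camp finds the other's position so improbable.

```python
# Toy illustration of the "conjunction" framing in the doom debate.
# My own numbers, assuming independence between steps; it only shows
# why multiplying many steps makes each camp see the other's position
# as extraordinarily unlikely.

n_steps = 5  # safeguards: testing, monitoring, mutual policing, etc.

# Anti-doom framing: doom requires EVERY safeguard to fail.
p_each_safeguard_fails = 0.10
p_doom = p_each_safeguard_fails ** n_steps
print(f"Anti-doom framing: P(doom) = {p_doom:.5f}")  # 0.00001

# Doom framing: safety requires EVERY safeguard to hold against a
# smarter-than-human adversary, which doomers rate as a coin flip at best.
p_each_safeguard_holds = 0.50
p_safe = p_each_safeguard_holds ** n_steps
print(f"Doom framing: P(safe) = {p_safe:.5f}")       # 0.03125
```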
Find out about these amazing similarities between how our brain and artificial neural nets work! (The Economist)
To be sure, artificial neural networks are only very crude emulations of the way our brain works, but researchers have still uncovered unexpected parallels that will blow your mind!
The seminal study comparing brains and Artificial Neural Networks (ANNs) was published in 2014.
The researchers compared what was going on inside the electronic network to what was happening inside the brains of macaque monkeys that had been set the same task of picking out objects from photographs, and whose brains had been wired with electrodes.
They found arresting parallels between how the monkeys represented images and how the computers did.
“The paper was a game-changer,” says another professor at MIT. “The [artificial] network was not in any way designed to fit the brain. It was just designed to solve the problem, and yet we see this incredible fit.”
A paper published in 2022 found that an ANN trained on image-recognition tasks produced a group of artificial neurons devoted to classifying foodstuffs specifically.
When the paper was published there was, as far as anyone knew, no analogous area of the human visual system.
But the following year researchers from the same laboratory announced that they had discovered a region of the human brain that does indeed contain neurons that fire more often when a person is shown pictures of food.
In another experiment from 2022, a pair of neuroscientists fed an ANN trained to recognise images with data recorded by an MRI scanner examining human brains.
The idea was to try to let the ANN “see” through human eyes.
Sure enough, the ANN was able to interpret data from any of the hierarchical layers of the biological visual system—though it did best with data from the higher levels, which had already been partly processed by the brain in question.
If the computer model was shown brain activity from a human that was looking at a picture of a particular dog, for example, then it would say that it was looking at that particular dog—as opposed to some other object—almost 70% of the time.
The fact that a silicon brain can happily accept half-chewed data from a biological one suggests that, on some level, the two systems are performing the same sort of cognitive task.
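For intuition, here is a minimal sketch of the general decoding recipe such studies use (my own illustration on synthetic data, not the researchers' code): fit a linear map from voxel activity to the ANN's feature space, then identify a new scan's stimulus by nearest neighbour in that space.

```python
# Minimal sketch of fMRI-to-ANN decoding (synthetic data, my own
# illustration; not the code from the studies above). The recipe:
# 1) learn a linear map from voxel activity to an ANN's features,
# 2) identify a new scan's stimulus by nearest neighbour in feature space.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_images, n_voxels, n_features = 200, 1000, 128

# ANN features for 200 stimulus images (here: random placeholders).
ann_features = rng.normal(size=(n_images, n_features))

# Simulate fMRI responses as a noisy linear readout of those features.
true_map = rng.normal(size=(n_features, n_voxels))
fmri = ann_features @ true_map + rng.normal(scale=5.0, size=(n_images, n_voxels))

# Train the decoder on the first 150 images, test identification on the rest.
decoder = Ridge(alpha=10.0)
decoder.fit(fmri[:150], ann_features[:150])

predicted = decoder.predict(fmri[150:])
# For each held-out scan, pick the image whose ANN features are closest.
dists = np.linalg.norm(predicted[:, None, :] - ann_features[None, 150:, :], axis=2)
correct = (dists.argmin(axis=1) == np.arange(50)).mean()
print(f"Identification accuracy among 50 candidates: {correct:.0%}")
```

The studies' figures (like the "almost 70%" dog identification above) come from this kind of held-out identification test, just with real scans and real network features.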
That insight might prove useful for brain-computer interfaces:
An ANN linked up to a camera, for instance, might be used to feed partly processed visual information into the brain.
That might help treat some forms of blindness caused by damage to the brain’s visual system.
In a paper published in May 2023, other researchers used an ANN to monitor brain signals from participants in an MRI scanner.
Using just data from the MRI, the ANN could produce a rough summary of a story that the test subject was listening to, a description of a film they were watching, or the gist of a sentence they were imagining.
And even more recently, an AI recreated videos people watched based on their brain activity (source, scientific paper with examples of videos)
The resulting system was able to take fresh fMRI scans it hadn’t seen before and generate videos that broadly resembled the clips human subjects had been watching at the time.
While far from a perfect match, the AI’s output was generally pretty close to the original video, accurately recreating crowd scenes or herds of horses and often matching the color palette.
Previous newsletters:
That's it for this week :)
If you made it all the way here, thanks a lot for reading this newsletter! A very simple way to encourage me to keep doing this is to take a few seconds to:
forward this to one curious friend, or share it on WhatsApp by clicking here
click on the little star next to this email in your mailbox
click on the heart at the bottom of this email
Thank you so much in advance! 🙏
Click here to subscribe and make sure you get future editions if this one was forwarded to you.
More about me
I cofounded KRDS right after college back in 2008 in Paris. We now also have offices in Singapore, Hong Kong, Shanghai, Dubai, and India, and we're one of the largest independent digital agencies in Asia. More here.
Watch our latest game showreel: At KRDS, we take pride in designing and developing games from scratch for brands and organizations, big and small! Gamification has always been part of our DNA, since our early days creating viral apps on Facebook back in Paris as the very first Facebook marketing partner outside of the USA!
I launched 2 sister agencies:
OhMyBot.net, dedicated to designing and building chatbots (watch the video case study for a chatbot campaign we ideated and developed for Clean & Clear: The Teen Skin Expert)
The WeChat Agency for the Chinese market (the Government of Singapore Investment Corporation is a client)
I also write op-eds and do podcasts at times. Here are my latest articles and podcasts.
For the French speakers:
I’ve written more than 50 articles on the future of technology over the past few years, all listed here.
This newsletter has a French version with slightly different content: Parlons Futur
Have a great weekend :)
Thomas