🚀 World's fastest shoes, machine reads mind at a distance, wear this sweater to escape AI, image-to-music, & more
Amazing text-to-video, prompt battles, AI creates "drone" shots from your phone footage, & more
Hi,
This is Thomas, Cofounder and CEO of digital agency KRDS.
You're receiving Future Weekly, my personal selection of news about some of the most exciting developments in technology, summarized as bullet points to help you save time and anticipate the future.
First, you'll find small bites about many different news items, and then, further down, summaries of these 4 articles:
What we need to do now that global warming cannot be limited to 1.5°C (from The Economist)
How we’re now deep in the autonomy winter when it comes to self-driving cars
The world's fastest shoes promise to increase your walking speed up to 11 km/h
How scientists have been able to read minds from a distance for the first time!
Small Bites
New fun AI tool: upload a source image, suggest a word, and the tool will update the image under the influence of the word
share a pic of a piece of watermelon, enter "lamp", and it will generate a lamp with a watermelon look
Many more examples here
This sweater developed by the University of Maryland is an invisibility cloak against AI. It uses "adversarial patterns" to stop AI from recognizing the person wearing it. See the AI getting confused in that 1-min video
There were Rap Battles, now we have "Prompt Battle" parties: "It’s like a rap battle but with keyboards and DALL-E access
Same fierce competition and vibrant energy as a real battle" (DALL-E is a text-to-image AI tool by OpenAI)
New AI tools to create original avatar pics are all the rage:
for instance, profilepicture.ai: upload at least 10 photos, the AI trains for up to 3 hours, and for $34 you get more than a hundred new profile pictures (works for humans, cats, and dogs; full HD quality, 2048x2048)
Rewind: "the search engine for your life", a macOS app that enables you to find anything you’ve seen, said, or heard. This tech is only possible now thanks to the latest Apple chip
And yet another promising AI tool: it creates key insights of a podcast episode with short AI-generated audio summaries and lets you deep dive into the parts of the episode you find most interesting.
Similar: assemblyai.com "automatically summarizes audio and video files at scale with AI": for instance, a 30-min audio is summarized down to 5 bullet points, a paragraph, a title or 3 words 😱
I haven’t tested it, but even if imperfect, think about where we’ll be in 5 years!
This tool can generate music from an image. One example. Similar idea: someone on Twitter suggested "Put in the book, get a soundtrack that plays while you read it, inspired by the content."
Example of the text-generation AI tool GPT-3 used in a Google Spreadsheet: the AI takes the name of a guest from column 1 and something to mention from column 2, and generates a bespoke text for a thank-you card in column 3 🤨 (source with other examples)
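The spreadsheet trick above is essentially a per-row prompt template sent to a text-generation API. Here is a minimal Python sketch of the idea; the `generate` parameter and the prompt wording are my own stand-ins, not the actual formula used in the source:

```python
# Sketch of the spreadsheet trick: one prompt per row, one generated
# thank-you note per row. `generate` is a placeholder for any
# text-generation API call (e.g. a GPT-3 completion request).

def build_prompt(guest: str, mention: str) -> str:
    """Turn one spreadsheet row (columns 1 and 2) into a model prompt."""
    return (
        f"Write a short, warm thank-you card to {guest}. "
        f"Be sure to mention: {mention}."
    )

def fill_column_3(rows, generate):
    """rows: list of (guest, mention) pairs from columns 1 and 2.
    Returns the generated text for column 3 of each row."""
    return [generate(build_prompt(guest, mention)) for guest, mention in rows]

# Example with a stub "model" so the sketch runs without an API key:
rows = [("Alice", "the lovely flowers"), ("Bob", "helping us move")]
notes = fill_column_3(rows, generate=lambda p: f"[model output for: {p}]")
```

In the real spreadsheet version, `generate` would call the GPT-3 API from the sheet; the stub above just lets the sketch run locally.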
This tech will create "drone" shots from your phone footage (the 17-sec demo)
Tech analyst Benedict Evans on the metaverse:
Making a device better does not necessarily make it universal. Most obviously, we’ve been applying Moore’s Law to games consoles for 40 years or so, and they’ve got a lot better but most people don’t care.
A PlayStation 5 is objectively amazing, but the global installed base of games consoles is flat at only about 175m units, and it should now be clear that adding even better graphics (another decade of Moore’s Law) isn’t going to change that. Most people simply aren’t interested in that kind of experience no matter how much Moore’s Law Sony and Microsoft throw at them.
VR demos of industrial designers or heart surgeons looking at 3D models are cool, but most people’s work isn't in 3D either
We can’t know in advance whether the metaverse will gain mass adoption. A lot of very clever people did not realise that mobile would replace PCs as the centre of tech, so check back in a decade to find out. But the test is that for VR and AR to matter, we need to do things where 3D matters, whereas mobile did not have to create "mobile things".
Google’s text-to-video AI is amazing: see the video it generated for "A happy elephant wearing a birthday hat walking under the sea"
Not there yet: the AI's interpretation of "Salmon in a river" (source)
Finding this interesting? ❤️
If yes, feel free to take 3 seconds to forward that newsletter to one person, I'd be immensely grateful 🙂
If that email was forwarded to you, you can click here to subscribe and make sure to receive future editions in your mailbox (many CEOs and startup founders are subscribers)
More to chew on!
The Economist ran this headline last week, "Goodbye 1.5°C": Global warming cannot be limited to 1.5°C; the world is missing its lofty climate targets. Time for some realism.
The world is already about 1.2°C hotter than it was in pre-industrial times. Given the lasting impact of greenhouse gases already emitted, and the impossibility of stopping emissions overnight, there is no way Earth can now avoid a temperature rise of more than 1.5°C.
There is still hope that the overshoot may not be too big, and may be only temporary, but even these consoling possibilities are becoming ever less likely.
Overshooting 1.5°C does not doom the planet. But it is a death sentence for some people, ways of life, ecosystems, even countries.
If the rich world allows global warming to ravage already fragile countries, it will inevitably end up paying a price in food shortages and proliferating refugees.
The world needs to be more pragmatic, and face up to some hard truths:
1: cutting emissions will require much more money. Global investment in clean energy needs to triple from today’s $1trn a year, and be concentrated in developing countries, which generate most of today’s emissions.
2: fossil fuels will not be abandoned overnight
3: since 1.5°C will be missed, greater efforts must be made to adapt to climate change.
Fortunately a lot of adaptation is affordable. It can be as simple as providing farmers with hardier strains of crops and getting cyclone warnings to people in harm’s way.
This is an area where even modest help from rich countries can have a big impact. Yet they are not coughing up the money they have promised to help the poorest ones adapt.
That is unfair: why should poor farmers in Africa, who have done almost nothing to make the climate change, be abandoned to suffer as it does?
Finally, having admitted that the planet will grow dangerously hot, policymakers need to consider more radical ways to cool it. Technologies to suck carbon dioxide out of the atmosphere, now in their infancy, need a lot of attention. So does “solar geoengineering”, which blocks out incoming sunlight. Both are mistrusted by climate activists, the first as a false promise, the second as a scary threat. On solar geoengineering people are right to worry. It could well be dangerous and would be very hard to govern. But so will an ever hotter world.
We’re now deep in the autonomy winter.
Tech analyst Benedict Evans:
The first wave of machine learning, from 2013 onwards, made a lot of people think that this could actually work, and it certainly got us 90% of the way there, but it now seems fairly clear that having a car with no steering wheel that can drive across the country might be generations away, and will certainly take longer and cost far more than people hoped.
Argo, the Ford/Volkswagen autonomy joint venture with a $1 billion investment, is shutting down (TechCrunch, Engadget).
Ford said that it made a strategic decision to shift its resources to developing advanced driver assistance systems, and not autonomous vehicle technology that can be applied to robotaxis.
"Profitable, fully autonomous vehicles at scale are a long way off and we won’t necessarily have to create that technology ourselves," said Ford CEO Jim Farley. "It's estimated that more than a hundred billion has been invested in the promise of level four autonomy," he said during the call, "and yet no one has defined a profitable business model at scale."
In short, Ford is refocusing its investments away from the longer-term goal of Level 4 autonomy (a vehicle capable of navigating without human intervention, though manual control remains an option) toward more immediate gains in L2+ and L3 autonomy.
L2+ is today's state of the art: think Ford's BlueCruise or GM's Super Cruise, with hands-free driving along pre-mapped highway routes
L3 is where you get into the vehicle handling all safety-critical functions along those routes, not just steering and lane-keeping.
"Commercialization of L4 autonomy, at scale, is going to take much longer than we previously expected," Doug Field, chief advanced product development and technology officer at Ford, said during the call. "L2+ and L3 driver assist technologies have a larger addressable customer base, which will allow them to scale more quickly and profitably."
The World's Fastest Shoes Promise to Increase Your Walking Speed up to 11 km/h (source)
Developed by a team of robotics engineers who spun off their work at Carnegie Mellon University into a new company called Shift Robotics
You don't need to know how to roller skate to use the Moonwalkers; you just walk.
A strap-on design allows the Moonwalkers to be used with almost any pair of shoes, and each unit features an electric motor that powers a set of wheels similar to what you’d find on a pair of inline roller skates, but much smaller, and not all in a single line, so there’s no balancing required.
Sensors monitor the user’s walking gait while algorithms automatically adjust the power of the motors to match, synchronized between each foot, so the added speed increases and decreases as the user walks faster or slower.
battery-powered range of about 10 km
because they’re much smaller than an electric scooter or a bike, they’re easy to keep stashed at your desk or even in a backpack when not in use.
Full retail pricing for a pair of Moonwalkers is expected to be around $1,400.
Link to the Kickstarter campaign (they have raised almost 3X what they asked for)
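As a thought experiment, the gait-matching control described above (sensors measure the wearer's cadence, motor power scales to match, speed capped at 11 km/h, both feet synchronized) can be caricatured in a few lines of Python. The gain, the linear mapping, and the averaging are all invented for illustration; Shift Robotics has not published its algorithm:

```python
TOP_SPEED_KMH = 11.0  # advertised top speed of the Moonwalkers

def target_speed(cadence_steps_per_min: float, gain: float = 0.1) -> float:
    """Map measured walking cadence to a wheel speed in km/h, capped at
    the top speed. The linear mapping and gain are illustrative only."""
    return min(TOP_SPEED_KMH, max(0.0, cadence_steps_per_min * gain))

def sync_feet(left_cadence: float, right_cadence: float) -> float:
    """Both shoes track one shared speed so the feet stay synchronized;
    here, a simple average of the two measured cadences."""
    return target_speed((left_cadence + right_cadence) / 2)
```

The point of the sketch is the shape of the loop: speed rises and falls with the user's own gait, so there is no throttle to operate and no balancing to learn.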
A new method appears to be the first to reconstruct language from brain activity noninvasively (source)
From the scientific paper itself:
Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single language decoder can be applied to a range of semantic tasks.
Past mind-reading techniques relied on implanting electrodes deep in people's brains. The new method instead relies on a noninvasive brain scanning technique called functional magnetic resonance imaging (fMRI) to reconstruct language.
This algorithm, designed by a team at the University of Texas, can “read” the words that a person is hearing or thinking during an fMRI brain scan.
“If you had asked any cognitive neuroscientist in the world twenty years ago if this was doable, they would have laughed you out of the room”
Using such fMRI data for this type of research is difficult because it is rather slow compared to the speed of human thoughts.
Instead of detecting the firing of neurons, which happens on the scale of milliseconds, MRI machines measure changes in blood flow within the brain as proxies for brain activity; such changes take seconds.
The reason the setup in this research works is that the system is not decoding language word-for-word, but rather discerning the higher-level meaning of a sentence or thought.
The algorithm was trained with fMRI brain recordings taken as three study subjects—one woman and two men, all in their 20s or 30s—listened to 16 hours of podcasts and radio stories.
However, it does have some shortcomings; for example, it isn’t very good at conserving pronouns and often mixes up first- and third-person. The decoder, says Huth, “knows what’s happening pretty accurately, but not who is doing the things.”
If that’s the main limitation, wow, that’s already mind-blowing!
Notable from a privacy point of view: a decoder trained on one individual’s brain scans could not reconstruct language from another individual. So someone would need to participate in extensive training sessions before their thoughts could be accurately decoded.
Sam Nastase, a researcher and lecturer at the Princeton Neuroscience Institute who was not involved in the research, says using fMRI recordings for this type of brain decoding is “mind blowing,” since such data are typically so slow and noisy.
Since the decoder uses noninvasive fMRI brain recordings, it has higher potential for real-world application than invasive methods do, though the expense and inconvenience of MRI machines are an obvious challenge.
Wow: the results reveal which parts of the brain are responsible for creating meaning. By using the decoder on recordings of specific areas such as the prefrontal cortex or the parietal temporal cortex, the team could determine which part was representing what semantic information.
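To make the "meaning, not word-for-word" idea concrete: per the paper, the decoder scores candidate word sequences by how well an encoding model's predicted brain response matches the observed fMRI recording, and keeps the best match. A toy Python sketch of just that scoring step, with made-up 2-D "brain responses" standing in for real fMRI features and a lookup table standing in for the trained encoding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pick_best_candidate(observed_response, candidates, encode):
    """Keep the candidate sentence whose predicted brain response
    (via the encoding model `encode`) best matches the observed one."""
    return max(candidates, key=lambda s: cosine(encode(s), observed_response))

# Toy demo: a fake "encoding model" mapping sentences to 2-D responses.
fake_encode = {
    "the dog ran": [1.0, 0.0],
    "a dog was running": [0.9, 0.1],
    "the stock market fell": [0.0, 1.0],
}
best = pick_best_candidate([0.98, 0.02], list(fake_encode), encode=fake_encode.get)
```

This is only the ranking idea; in the actual study, a language model proposes the candidate word sequences and a beam search keeps the decoding tractable.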
The last newsletter: The biggest digital change since crypto; Steve Jobs resurrected; change your voice in 2 sec; AI creates a comic book & more
That's it for this week :)
If you made it all the way here, thanks a lot for reading this newsletter! A very simple way to encourage me to continue doing this is to take a few seconds to:
share this with a curious friend
click on the little star next to that email in your mailbox
click on the heart at the bottom of that email
Thank you so much in advance! 🙏
Click here to subscribe to make sure you get future editions if this one was forwarded to you.
More about me
I cofounded KRDS right after college back in 2008 in Paris. We now also have offices in Singapore, HK, Shanghai, Dubai and India, and we're one of the largest independent digital agencies in Asia. More here.
I launched 2 sister agencies:
OhMyBot.net, dedicated to designing and building chatbots
The WeChat Agency for the Chinese market
I also write op-eds and do podcasts at times. Here are my latest articles and podcasts.
For the French speakers: I’ve written more than 50 articles on the future of technology over the past few years, all listed here.
Just in case: this newsletter has a French version with more content, Parlons Futur
Have a great day :)
Thomas