Even though we've been reporting on AI since our very first newsletter (in September 2020⏱️✈️), writing about developments in AI these days is different. It feels like constantly walking a fine line between sharing enthusiasm for what's unfolding and staying away from the hyped headlines that dominate the news. We want to take a long view, but still stay close to the cutting edge; if only for an amusing look back at what we thought was happening when we re-read this in a few years' time.
Two developments, happening as we speak, feel like the dawn of a new phase: multi-modality and the first research publications on AI's impact on our working lives.
ChatGPT (and its undercover Microsoft sibling Bing Chat) is now able to take in images and voice. Think of showing it the contents of your fridge and asking for instructions to cook a meal. Or showing a photo of your bike and asking how to raise your seat. The other way around now also enters a new phase: image creation will integrate with the chatbot. You will also be able to interact with the model verbally, making a spoken conversation possible. All of these developments will lower friction and barriers to entry, making the tools useful for an even broader audience.
In the research department, the paper that struck me most is a first validation of the claim that AI would improve the speed and quality of knowledge work. You can read the full paper here or read a great overview by one of the authors, Ethan Mollick. The researchers enlisted over 750 knowledge workers from BCG (a management consultancy) and had them complete realistic client engagement exercises with and without the use of GPT-4; both speed and quality increased significantly for the GPT-4 group.
An interesting first answer also formed to the question of which worker segment would benefit most from the augmentation. Would AI be a 'kingmaker' (boosting the highest performers), an 'escalator' (boosting all) or a 'leveller' (boosting lower performers, closing the gap)? The last option seems to be what the research shows:
"Where previously the gap between the average performances of top and bottom performers was 22%, it shrunk to a mere 4% once the consultants used GPT-4."
Seeing the landscape evolve rapidly before our very eyes, I am constantly surprised by new creative use cases. If AI can turn a conceptual drawing into working software, the proof is building that AI can supercharge our production power, becoming the next incarnation of our bicycle for the mind. Which leads me to the question I am most eager to see answered: if we have magical realisation powers at our fingertips, will we spend more time thinking about what we want to realise in the first place? Will AI lead to even more busywork, or will it lead to more purpose, more intent?
Just when you think you have a complete understanding of how something works, you get new information that shows that in actual fact... it's slightly more complicated. Recognisable? It can always get more complicated, which honestly speaking is exactly what yours truly thinks makes life so interesting and fun 😎.
A couple of weeks ago, we spoke about the different modes of thinking triggered by either listening to or reading information. If we focus on the latter, it turns out it actually matters how you're reading. Are you reading digitally or traditional analog style, i.e. from a physical piece of paper?
Discussing the topic last week, my co-author pointed me towards an article in the Dutch newspaper 'De Volkskrant'. It covers how schools in Sweden, once at the forefront of digitalising education, are now making a move back to physical books. The reason: average literacy and reading comprehension have dropped since laptops, notebooks and other digital tools were introduced to the classroom.
This trend is backed up by a UNESCO research report referenced in the article, which concluded that [translated from Dutch]:
"...digital methods can never replace human contact, teaching, explanation and help by teachers. Tablets, smartphones and laptops do not always contribute to the ‘wellbeing’ of students."
There is much more proof that reading from paper leads to better understanding of the text, especially longer pieces of text, compared to reading it from a screen. Still, technology should be present in modern schools and is very useful. The trick (again): finding the right balance, and especially finding the right application for each technology, whether modern or old-fashioned. If we're able to teach students how to make those decisions, I will not be worried about their ability to further their understanding of our complex world.
Having had a couple of odd (sometimes simply bad) experiences with providing feedback to recent graduates, I set myself the task of analysing how this could have happened. I'm pretty sure everyone wants to learn, and feedback plays a key, yet often uncomfortable (on the receiving end), role in that. The way I've learned to provide positive and negative feedback (and have received it from the older generation) no longer seems very effective.
Even with 'the gloves on', I noticed that any negative feedback became a great source of concern and insecurity for the people I was talking to. They started doubting whether they were good enough for the job or whether they were in the right place at all. When you consistently receive negative feedback, I can understand that these thoughts may start to cross your mind. But after one session?
Discussing the issue with friends and family, some themes started to emerge. First of all, the content of the message was not the issue; the feedback itself had its merits. Secondly, the recipients seemed caught off-guard, not knowing what to do with the situation, as if this was the first time they had encountered a somewhat difficult one-to-one. This triggered some further thinking.
Could the way we interact personally with each other be so different between the generations that it actually triggers all kinds of unintended side-effects? Current graduates rely far more on tools such as social media and direct-messaging apps that do not require real-life personal interaction; seeing a facial expression combined with the words makes a message land completely differently than when it is just read from a computer screen (or listened to 😉). I'm not saying this is necessarily wrong, I just observe that it may make life quite challenging. After all, older people tend to be the teachers of the younger ones.
Still, I feel there is room for some rebalancing. The finesse of personal interaction plays a vital role in our society and needs to be practised. How often does a child actually go up to somebody to ask a question, let alone make a simple phone call (still no facial expression, but at least some tone to the words)? Practice makes perfect, but if you do not get to practise that often anymore, what happens?
I also believe this topic is connected to our earlier observation about the lack of focus on resilience - our capacity to recover swiftly from adversity (see the obvious link?).
Processing all this, I keep on coming back to the conclusion that more personal interaction is the solution. What would this interaction in today's world look like? Is it through social media after all and do I need to finally make that jump (I'm extremely reluctant to do that) or can we find alternatives? We need the intimacy, one way or the other. Or both.
The intriguing insight we highlighted before, that our thinking is influenced by which mode of communication we use, triggered me to learn more. Last year we quoted:
"Consequently, we propose that people think more intuitively in the spoken modality and more analytically in the written modality."
But what happens if it is the other way around? In other words, does it matter to how you think when you're on the receiving end of different modes of information? Apparently it does, and in much the same way as when you're the transmitter of information.
In the article 'Do you think more clearly when reading or when listening', evidence is presented that, generally speaking, different brain circuits are at work depending on whether you listen to or read information. Listening tends to invoke intuitive thinking, relying on gut feelings and instincts that come (and go) without much effort. Reading tends to trigger analytic thinking, taking time to evaluate all arguments and evidence before reaching a conclusion.
One explanation for this behaviour lies in how we learn to speak and read. Generally, we learn to speak a language by listening carefully and responding in a spontaneous, trial-and-error kind of way. A very intuitive process. Learning to read is much more organised: it follows a set of rules and requires a lot of practice. Hence, different mental processes are in play when learning to speak compared to learning to read.
"Because of their experience with learning and practising reading while growing up, people may become conditioned to thinking relatively analytically when they read and get accustomed to putting in a bit more mental effort, compared with when they listen."
I believe these insights could have significant implications. You could, for instance, force yourself to both listen to and read information about a topic you need to make an important decision about. Partly, we naturally tend to do this already. Being social animals facing big decisions, we often query our friends and family and listen to their experiences and advice. Internally, we try to combine those inputs (which we will mostly have digested intuitively) with hours spent surfing the internet and reading magazines, books and other literature.
Is the combination of the analytical and intuitive thought processes the holy grail? Or, does this depend on the subject matter? What does this tell us about the advice to listen to our gut feeling?
The trick will be figuring out when to use which modality and in what balance. That might be the ultimate trait. I'd say: go back to your intuition and gauge what feels right, and when 😉!
Not just looking at the functional aspects of products and services, but looking at the emotional (how does it make me feel?) and social (how does it make me look?) factors as well, has got to be my favourite business tool to date. As more and more research proves how much of our decision-making is guided by our subconscious, looking beyond the functional aspects is a key skill for finding and growing your customer base. Beware, this goes way beyond the marketing and sales trickery of adding the right atmosphere and associations, as Coca-Cola, for instance, has been doing successfully for decades on end. No, this is all about baking these elements right into the entire customer experience.
A compelling example is the rise in vinyl record sales that has been going on for the past 10+ years. If you look at the functional aspects of playing music, vinyl records lose to streaming services by a mile. They scratch, are hard to take with you, typically hold only 22 minutes of music per side and never auto-play the next song when done. Spotify holds almost all the music you can imagine, mix-tapes (playlists) are created with one click, you can take them anywhere on your phone and the sound quality is generally quite good. Easy choice, right?
The other day, friends explained how they got back into vinyl records, growing their collection, loving the intentionality of taking out a record and making the time to play it. Like a Japanese tea ceremony, it's not about being thirsty. It's a ritual, a habit that you tie into taking time for yourself and appreciating art.
I once had a wonderful conversation with a dear friend about another role of book and record collections. If you're old enough, you will remember how peeking at someone's record collection or bookshelves made you instantly aware of the owner's interests and tastes. A shared interest in an artist, a specific record or a book immediately sparked conversation, an easy gateway into getting to know one another.
To be honest, I largely ignored these non-functional aspects of owning, playing and displaying your music. I jumped on the digital and streaming bandwagon as soon as it gained real-world usability. But now that I've been fully converted for almost two decades, I can appreciate the appeal. A mix of "you don't know what you've got 'till it's gone" and basic economic theory (the law of diminishing returns: scarcity increases the value of some non-functional aspects) is probably a driving force behind the rediscovery that is going on.
Spotting 'Spotify walls' on Pinterest is another clue that the gap is clearly there. A quick search immediately finds tons of products that tie the digital realm to the physical. I love how a flurry of entrepreneurial minds spotted the non-functional needs beyond the nostalgia and started building stuff. Every now and then, progress means taking a step back.
Last week, we gave some thought to AI technology that might augment your ability to read other people's feelings, and how this technology might make you feel. Once you conquer the stage of feeling threatened and accept the new tools, you may become dependent on them and ruin your own intuition. Use it or lose it, right?
In a lecture I attended a few weeks back, AI specialist Phanish Puranam shared an intriguing example of the pitfalls that might be lurking. In large law firms, junior lawyers perform most of the prep work for the senior lawyers. They dig through stacks of documents to summarize facts, find clues and form suggestions for approaching the case. Large Language Models like GPT-4 promise case analysis at high speed and low cost, potentially reducing the reliance on juniors. If you go down this road, however, you are destroying your talent pipeline. Building your senior-lawyer experience and intuition (our own machine learning model, so to speak) requires grinding away at case material for a number of years. As Einstein once said:
"Intuition is nothing but the outcome of earlier intellectual experience"
Since the junior work needed to be done in any case, the learning almost came as a by-product. Now, the learning must become intentional, and the process reconsidered as an investment. New equations will come with new optimal solutions, all part of the impact we can't even envision yet.
An interesting part of the puzzles before us is the level of dependence we are comfortable with. As investigated before, we already depend on technology for much of our lives, which is apparently fine for most of us. On the other hand, if my car's navigation sends me in a direction I feel uncomfortable with, I am glad I have a sense of direction and basic geographical knowledge.
It's up to us to define the fine line between using technology as a bicycle for the mind and leaning on it like a crutch. Interesting times.
After seeing the documentary 'The Social Dilemma' and writing about it way back in Q&A 003, I once half-joked to my two sons about them getting a mobile phone when they turned 18. A few weeks back, I overheard them repeating this exact statement to one of their friends, leading to a heartfelt 'really?' in response. Now that they are almost turning 9 years old, mobile phones are apparently becoming an issue, which means we have to start defining a strategy.
Fully realising the addictive nature of modern smartphones and the suite of applications inhabiting them, I contemplated our responsibility as parents in relation to potentially harmful substances and experiences in general. What makes us protect our kids from certain elements up to a certain age?
My working hypothesis right now is that we seek to shield our kids from anything they are not yet able to handle themselves. Be it physically or emotionally, we try to estimate the moment when certain experiences might still be challenging, but no longer harmful. This hypothesis made me think of the classic "comfort-stretch-stress" distinction, in which alternating between comfort and stretch provides growth and learning, while stress can cause adverse effects. For your kids, you aim for learning and growth, not trauma.
I ended up concluding that a 0/1 kind of decision is probably not the best solution. As with alcohol, being allowed to switch from soda to coma-drinking at age 18 has its downsides. Adding extra dimensions to this puzzle could help: usage limits, time limits, app limits. Expanding freedom slowly might be the approach we're going for.
Discussing this subject did bring up several reasons for humility. Even as adults, we are sometimes not fully in control of our own mobile phone use: checking for messages, the inability to let a badge stay unchecked, doomscrolling on Twitter or YouTube. The strategies we will employ for our kids might warrant some introspection as well. We will keep you posted.
I'm still processing a recent tweet from Naval Ravikant, as I'm unsure what I actually think of it:
"The only real test of intelligence is if you get what you want out of life."
Measuring something by its results surely has its merits, as long as you measure the results the right way. A famous instance of measuring something by its results is the Turing test, proposed by Alan Turing in his 1950 paper 'Computing Machinery and Intelligence'. As the question of whether machines can 'think' is quite difficult to define, Turing proposed to see if a conversation with a computer could be indistinguishable from a conversation with a human. If the human judge could not tell they were talking to a computer, the machine would pass the test.
One wonderful example of AI trying to pass this test is Google Duplex, already four years old, but a great illustration of how computers are getting better at conversation. The AI can act as a personal assistant, handling the making of appointments for you.
The other day, Alex Tabarrok shared another great example of next-level AI interaction. Diplomacy is a 7-player game in which players must persuade, cajole, coordinate, strategize, bluff and lie to one another in order to take over the world. For the first time, an AI has achieved success in Diplomacy:
"Over 40 Diplomacy games with 82 human players involving 5,277 messages over 72 hours of gameplay, CICERO achieved more than double the average score of the other players and ranked in the top 10% of players!"
If you want to see the AI in action, check out professional Diplomacy player CaptainMeme commenting on an entire game on YouTube.
As more and more companies get better at using technology to reach us through messages and phone calls, I especially look forward to these AI assistants acting as a gatekeeper for all of us, fighting fire with fire while promoting real interaction.
The usefulness of cryptocurrencies and blockchain technology often features on the Q&A discussion table. Though we see merit in the technology and its numerous applications, we're concerned about its environmental impact. Most current blockchain implementations consume a tremendous amount of energy. Additionally, we believe the technology is still in its early days and you need to be quite tech-savvy to understand its workings. With most of its applications focused on creating new financial instruments, these new worlds have a tendency to become crowded with crooks.
However, as with any big innovation, time is required to adapt it to what society really needs. The Economist recently reported on a seemingly big step forward in blockchain technology that is happening in the so-called 'Ethereum' network. On September 15th, it plans to change its algorithms in such a way that its network will reduce its power usage by 99%. This move, dubbed 'the Merge' (as it entails merging the current network with one that has been running in parallel in testing mode), will reduce Ethereum's energy consumption overnight by almost 100TWh, the energy usage of a small country like the Netherlands or Chile (see graph below from the Economist article).
"It can be surreal to watch this happening in real time. It is as if The Economist started to live stream its editorial meetings and allowed subscribers to commission articles and select covers."
This overhaul of software that currently represents >$200bn in value will be done with no downtime, has been tested for more than two years, and is carried out by a decentralised group of 122 developers in 30 different countries. That is a huge achievement.
"More important still, the merge will, if successful, suggest that Ethereum has the capacity for self-improvement, opening the door to more sweeping changes."
I also wonder about the impact on power prices and global energy distribution. All in all, a big step forward for blockchain technology. I'm more curious than before to see what it'll lead to.
When two of my favourite thinkers come together for an interview, I take notice. Tyler Cowen of Marginal Revolution interviewed Marc Andreessen (of Netscape browser fame and now partner at Andreessen Horowitz, a venture capital firm). Both of them have a penchant for taking the long view and looking at human development, with often interesting thoughts on creativity and innovation.
Asked by Cowen which TV show was seminal for his intellectual development in high school, Andreessen immediately responds with 'Knight Rider':
"There was a wave of these near science fiction shows in the late ’70s, early ’80s that coincided with . . . Some of it was the aftermath of Star Wars, but it was the arrival of the personal computer and the arrival of computer technology in the lives of ordinary people for the first time. There was a massive wave of anxiety, but there was also a tremendous sense of possibility."
Going down the rabbit hole of watching analyses of the TV series one evening, it struck me how much of the technology in Knight Rider is only now being realised. Self-driving cars, artificial intelligence, sensor technology, connected knowledge databases; none of this was part of our daily lives when Commodore 64s and MSX home computers launched during the early years of the series (1982-1986).
Expanding the possible is often best done by describing the desired outcome, the Utopia so to speak. After that, adjacent possibilities have a tendency to fill the gap organically. Which brings me to one of my favourite questions: if current technology were no objection, what kind of Utopia would you wish for?
Tim Ferriss's interview with Jane McGonigal started with an entirely different subject: how to combat insomnia. Her answer was simple: play a videogame for 10 minutes.
This obviously goes against all the other advice out there for getting a proper night's sleep, and therefore got my attention. We've been taught to reduce screen time and not engage in mind-occupying exercises before bed. Yet it turns out that a certain subset of videogames seems to be very effective.
"Any visual pattern matching game where your brain is looking at colors, looking at shapes, looking at how things are arranged in space will be really good."
Researchers at Oxford University even found that playing Tetris can be used to mitigate post-traumatic stress disorder (PTSD). Apparently, playing the game for a short time results in a kind of visual overwriting of what could otherwise become a compulsive, repetitive cycle of imagery caused by a trauma. In other words, it prevents unwanted flashbacks from traumatic events.
All avid videogamers, please hold your horses before cheering too loudly; it also turns out that just 10 minutes is really the right dose. Will you be able to control the reinforcing self-rewarding neurocircuitry?
Combining the power of AI with robotics is providing us with interesting views on the future of recycling, as the combination looks like a fruitful one for automating laborious efforts. Eradicating monotonous work that in the past only humans could do looks increasingly achievable.
This technological look-forward contrasted nicely with a return to older wisdom I encountered in the documentary The Biggest Little Farm on Netflix. If either Regenerative Agriculture or Epic Hero's Journeys spark your interest, this is an evening well spent.
While agricultural technology (synthetic fertilizers, pesticides, monocultures) created problems we spent the last century constantly fixing forward, the view of farming depicted in the documentary resonates deeply with me. Something about it just feels completely logical in the deepest sense possible.
Opponents of Regenerative Agriculture, however, rightfully claim that because of the extensive manual labour involved, this way of farming is too expensive, at least for a large part of the world's population. I might consider spending money on expensive vegetables because I'm convinced it's the best 'fuel in the tank' for my family, but the majority of households don't have this option.
And this is where I will abuse my writer status to dream a bit of an alternate technological horizon. Instead of large fields of monocultured, genetically engineered crops, sprayed with pesticides to fend off predators, what if we could combine our ancient wisdom of sustainable farming ecosystems with robotics and AI to make it economically viable? Now that would certainly be a future I would look forward to.
When two seemingly unconnected fields of science come together, there is an odd chance of a unique insight surfacing. Geophysicists and archaeologists worked together in Israel to date findings in a timeframe called the Hallstatt plateau. It is a period from roughly 800-400 BC in which, for reasons not entirely clear, the radiocarbon (C14) dating method does not work properly. The Economist reports that scientists have been able to use residual magnetism to date findings instead.
Residual magnetism is a term used for materials that have lost their magnetism during intense heat and were re-magnetised by the earth's magnetic field when they cooled down. Minerals in pottery, iron tools and utensils that got caught in a big fire are examples.
Trusted accounts report that Sennacherib, King of Assyria, had the city of Lachish, in present-day Israel, destroyed and burnt to the ground in 701 BC. Knowing this and finding residual magnetism on the spot allowed archaeologists to date findings and shed better light on this period, from which findings are generally hard to date.
Furthermore, having been re-magnetised by the earth's magnetic field of that particular time, the findings also help geophysicists understand how movements in Earth's core change the planet's magnetic field. This particular period was one "in which this field was usually about 50% stronger than it is today, and for short periods was double today's strength".
As it turns out, by brutally destroying conquered cities, Sennacherib, like many other Assyrian kings, was doing his bit for science. Luckily for the people living in that area, the Assyrians were succeeded by the Babylonians, followed by the Persians, who were, as Mr Vaknin, the lead geophysicist in this research, observed: "kind enough not to destroy cities", thereby also leaving valuable material and insights for generations to come. There are indeed multiple roads to Rome (even though it did not exist in those days).
There are some interesting developments in the financial world, and I am not talking about inflation (which appears to be here to stay for a while longer). The Economist reported recently that the Venture Capital industry is experiencing enormous growth, which could have very positive side-effects.
The Venture Capital industry deploys about 2% of the world's institutional assets. It has, however, been able to use this money wisely: seven of the world's ten largest firms were once backed by Venture Capital. Somehow, the industry has found a good algorithm for investing in the innovations that the world wants (like search engines and electric cars) or needs (like mRNA vaccines).
With more than $450bn of fresh cash being invested in Venture Capital, the sector is being scaled up in an unprecedented way. It is spreading both over a wider range of industries and internationally. This expansion creates tensions and challenges, but also lots of opportunities.
"A larger pool of capital chasing a bigger universe of ideas will boost competition, and is likely to boost innovation, leading to a more dynamic form of capitalism".
The author of the Economist article concludes:
"Venture capital aims to take good ideas and make them bigger and better: it is only right to apply that logic to the industry itself."
I'm all for innovation and believe that especially our financial systems are in dire need of an upgrade. The question is: is this the right path? Will the potential benefits outweigh the ever-increasing transfer of wealth to the already rich, as the current capitalist system is designed to produce? What is the natural mechanism to restore the imbalance?
Facebook used to be a great idea. Connecting people, enabling communication, what's not to love? Combine that with a fast-growing environment and you can see how flocks of coders and designers were attracted to the sprawling young company, laying the base for the empire it is today.
But the times, they are changing. Facebook is constantly in the crosshairs of public opinion and lawmakers, for what I think are good reasons. As growth became at odds with inspiring goals, the goals crumbled in favour of trying to sustain growth.
The parent company rebranded itself as 'Meta' last week, in a move that many dismiss as an attempt to dissociate its activities from a toxic brand name. I considered the name change suggestions TNW proposed much more appropriate; 'Nothing Suspicious Going On Here' or 'Koobecaf' felt like worthy candidates. But alas.
A soothing thought: progress will happen anyway. Companies can change over time, doing new things, reinventing themselves. But this route is not the most obvious one, as the characteristics of companies often prevent reinvention. Big or small, companies tend to stick to their behaviour, even to the point of creating something completely unsustainable.
No, the route most often travelled is that of natural decay. Like Facebook taking the place of MySpace (remember?), Zuck's empire will one day make place for something new. The tiny buds, the fresh young green leaves of tomorrow are our safest bet.
The last decade, barring a few exceptions, has not seen countries waging war against each other to increase power or annex valuable resources. But looking beyond guns, tanks and airplanes invading another country, one immediately recognizes that wars are being fought between nations across the globe: IT systems being hacked, social media influence campaigns, and discriminatory immigration rules.
Perhaps the biggest one is the trade war. Governments around the globe are involved, sometimes colluding, to basically promote the wealth of their own people above that of another country.
A good recent example with potentially far-reaching implications centers on the semiconductor industry. In this day and age, semiconductors are crucial. However, not all countries have easy access to them. China, as shown in the chart below, is currently spending more on importing semiconductors than it does on oil.
The US and its usual allies have taken the opportunity to show their force, blocking technology companies like ASML from selling to China and disallowing trade of certain products.
With less opportunity to export and more than a million engineers graduating each year, it's just a matter of time before China becomes completely self-sufficient. By then, it will have stimulated innovation to the point that its products will be dominant. Meanwhile, China's eyes are again focused on Taiwan, home of TSMC, the world's largest semiconductor manufacturer, creating political and military tension in the region.
I've never really understood why we would engage in such economic (and political) warfare. Why we believe we deserve better than our neighbours or far-away brothers and sisters is beyond me. Are we not stuck on the wrong side of the prisoner's dilemma?
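For readers who'd like to see that dilemma in numbers, here is a minimal sketch with purely illustrative (made-up) payoffs: whatever the other country does, protectionism pays more individually, yet mutual protectionism leaves everyone worse off than mutual free trade.

```python
# A minimal prisoner's-dilemma sketch applied to trade policy.
# The payoff numbers are illustrative assumptions, not empirical estimates.
# Each country chooses to "cooperate" (free trade) or "defect" (protectionism).

# PAYOFF[(my_move, their_move)] -> (my_welfare, their_welfare)
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # mutual free trade: best joint outcome
    ("cooperate", "defect"):    (0, 5),  # I open up, they protect: I lose out
    ("defect",    "cooperate"): (5, 0),  # I protect, they open up: short-term win
    ("defect",    "defect"):    (1, 1),  # trade war: worst joint outcome
}

def best_response(their_move: str) -> str:
    """Return the move that maximises my own payoff, given the other side's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFF[(my_move, their_move)][0])

# Whichever move the other country makes, defecting pays more individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1 + 1) yields far less total welfare than
# mutual cooperation (3 + 3): the trap the text alludes to.
```

With these (hypothetical) numbers, rational self-interest pushes both sides into the trade war, which is exactly the "wrong side of the prisoner's dilemma" mentioned above.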
Free international trade could, in the long term, increase both welfare and wellbeing (why is one written with just one 'l'?) for everyone, as long as we find a solution for its environmental impact. As the old Chinese saying goes: "if you can't beat them, join them".
The podcast market is experiencing enormous growth. According to The Podcast Host, the number of podcasts has grown at least fourfold over the past three years. A podcast is attractive because it lets you listen to your topic of choice whenever and wherever you want. It's fully customized to your own needs and a seriously good alternative to the repeating sequence of news items you normally encounter on a radio station.
I personally use podcasts not only to relax but also to learn and grow. I even discovered that programme duration is not an issue. I recently finished a six-part series on the Second World War in the Pacific, with each episode lasting four to five hours. The episodes were released as they were created; from the first episode to the last took more than 2.5 years. This did not bother me at all.
Anyone can create and publish podcasts. With currently more than three million podcasts out there, finding the ones that will entertain you is a challenge. Your interests and hobbies are a good starting point. On my personal list, for instance, are background to the daily news (the New York Times' 'The Daily'), history (Dan Carlin's 'Hardcore History'), and inspiration and optimism (Simon Sinek's 'A Bit of Optimism').
With the world opening up, there'll be plenty of time to listen to podcasts while commuting or on your way to friends and family. There are opportunities to listen at home as well. The lockdown period has taught me various new moments to put on your headphones; podcasts make great amusement whilst cooking, doing the dishes or the laundry. You may even start to look forward to those tasks!
The big privacy dance continues. Turning away from our usual target Google, it seems even companies known for holding privacy in high esteem are not immune to governmental pressure. This week, Apple announced several new features that allow users to turn off physical and online activity tracking on their iPhones, iPads and other mobile equipment. However, digging into the small print, it becomes clear these features will not be made available in several countries, most notably China.
It's not the first time Apple has had to change its usual policies. Technology magazine Wired reported that in April of this year, the company added an extra setup step to newly sold iOS devices, specifically for customers in Russia.
"Alongside questions about language preference and whether to enable Siri, users will see a screen that prompts them to install a list of apps from Russian developers. It's not just a regional peculiarity. It's a concession Apple has made to legal pressure from Moscow."
Of course, governments could be promoting domestic software to their citizens for economic reasons or to safeguard national security. Still, it is fascinating to observe the dynamics and how a big company like Apple is able to find a middle ground. After all, Apple gives the choice to the customer rather than the government. It may not last, but I am confident that by that time, companies like Apple will find ways to offer their customers an alternative.
It's been a wild few weeks for cryptocurrency. Most likely bowing to outside pressure, Elon Musk reversed his decision to accept Bitcoin as payment for a Tesla, citing the enormous environmental impact of its current consensus mechanism (which he knew well before, of course...) and even floated the idea of selling some of Tesla's holdings. As of today, Bitcoin consumes 0.66% of all worldwide electricity, surpassing the Netherlands in power usage.
While I share the environmental concerns, I think the basic premise of cryptocurrencies does look promising. In a leader for their special report on the future of banking, The Economist provided an overview of developments in government-backed cryptocurrencies (govcoins). Crypto could make finance accessible to the 1.7 billion people worldwide who currently lack bank accounts, which would mean great progress for a vast number of people.
Since Bitcoin is hard-coded to limit the number of coins in circulation, it is also praised as a hedge against inflation. This has its downsides, since inflation is also a tool for making our economy function properly. The trade-off for individual wealth preservation might well be our collective economic well-being. Based on the current worldwide state of affairs, it's up to us to decide which way we'd like the balance to swing.
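That hard-coded limit of just under 21 million coins follows directly from Bitcoin's issuance schedule: the block reward started at 50 BTC and halves every 210,000 blocks. A minimal sketch of the arithmetic (the real client also enforces this per block height, but the sum works out the same):

```python
# Bitcoin's total supply follows from its halving schedule:
# the block subsidy starts at 50 BTC and halves every 210,000 blocks.
# Subsidies are tracked in satoshis (1 BTC = 100,000,000 satoshis),
# with integer division mirroring the protocol's rounding.

BLOCKS_PER_HALVING = 210_000
SATOSHI_PER_BTC = 100_000_000

def total_supply_btc() -> float:
    subsidy = 50 * SATOSHI_PER_BTC  # initial block reward, in satoshis
    total = 0
    while subsidy > 0:
        total += BLOCKS_PER_HALVING * subsidy
        subsidy //= 2  # reward halves (integer division, like the protocol)
    return total / SATOSHI_PER_BTC

print(f"{total_supply_btc():,.4f} BTC")  # just under 21 million
```

The geometric series converges to just under 21 million BTC; the small shortfall comes from rounding the subsidy down at each halving.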
In the article 'The Rise and Fall of Getting Things Done' in the New Yorker, Cal Newport provides an extensive and useful summary of a century of productivity thinking and tools. It shows how the focus has shifted from the productivity of an entire company or division to that of the individual.
"The knowledge sector’s insistence that productivity is a personal issue seems to have created a so-called “tragedy of the commons” scenario, in which individuals making reasonable decisions for themselves insure a negative group outcome."
Because knowledge workers in particular require a great deal of autonomy, present-day workers are managed by clear targets but receive hardly any instructions on how to achieve them. Certainly during the current lockdown, it is hard for co-workers to judge whether you are busy (enough).
It is, therefore, logical to recognize that productivity is neither entirely personal nor collective. It's a balanced system that needs to be monitored and improved continuously.
"Imagine if, through some combination of new management thinking and technology, we could introduce processes that minimize the time required to talk about work or fight off random tasks flung our way by equally harried co-workers, and instead let us organize our days around a small number of discrete objectives."
One avenue being experimented with is the use of visualization tools such as kanban boards to get an overview at a glance, structure smart processes and divide the work intelligently and evenly. We can't wait to evaluate the learnings of all these forced experiments.
A few episodes ago we wrote about the power of using accountability partners in getting things done. The whole technique revolves around a single commitment, done in a moment of strength, that feels hard to break later on. In these lockdown times, 'remote' variants are turning up all over the net.
On Ness Labs, a gathering place for people interested in mindful productivity, multiple weekly sessions are planned to work together on non-urgent, important work (like writing articles). The practice consists of committing to being present, dialling into a large (typically 20-50 people) Zoom call with the camera on, and then working in silence until the timer runs out. Judging from the comments and attendance, the practice helps a lot of people get meaningful work done.
YouTuber Ali Abdaal took the practice a step further this week, inviting his viewers to join him while he spent an hour writing his new book. The first edition drew a turnout of over 800 people.
I love how these seemingly strange experiments can turn into something useful for a lot of people. It's a nice counterbalance to the negative effects of technology that are discussed so elaborately these days.
I have a love-hate relationship with social media. The internet has the potential to be a great connector in our human fabric, but it can also create toxic atmospheres and echo chambers. It can be a huge distraction too. My latest weapon against distraction has been a thorough cleaning and separation of apps. As a rule, 'consumption' apps for media can now only live on my iPad, leaving my iPhone void of anything that lends itself to 'doomscrolling'. When I do engage in social media, I keep Nir Eyal's words in mind:
Distraction is only distraction if it really distracts you from something. Otherwise it's entertainment. The physical act of grabbing my iPad signals 'entertainment', making it a conscious choice instead of unconsciously sliding from 'checking my mail' to 'checking Twitter' on my iPhone.
Yesterday I played around with new social kid on the block 'Clubhouse'. It's an audio-based discussion forum, where you can follow people and subjects you're interested in. Anyone can listen in to a 'room' and the moderators can invite listeners to chip in (you can raise your hand), sometimes making for some interesting conversation. The audio focus makes for great consumption while walking, driving or doing chores.
Zooming out to the greater narrative, I love seeing these new apps as experiments on digital human interaction. Clubhouse is an experiment, like Facebook and Twitter continue as an experiment. If we don't like something we move on to an alternative (Signal, anyone?). The tools are evolving, and so is our use of them. The growing pains are merely an indication that something is growing.
In 2020, valuations of IT and technology companies in particular rose to unimaginable heights. Companies like Apple, Amazon and Microsoft surpassed the one (and even two) trillion dollar mark. Unimaginable? Well, actually, no.
Consider this chart that was published by Jeff Desjardins in 2017:
I knew the Dutch East India Company was valuable in its time, but did not realize it was worth as much as the 20 largest modern-day companies combined. Note that the comparison was made in 2017. At today's valuations, it is 'just' as large as the five biggest companies combined.
There are some striking similarities. The Dutch East India Company was a young company when it reached its valuation heights: it started operations in 1602 and was worth $7.9 trillion after only 35 years. First, it had been granted the monopoly on trading spices with Asia. Second, the company's growth and the strong speculative interest in joining its success coincided with tulip mania reaching its peak.
Though present-day big tech companies have not been granted any monopolies, they do occupy very strong market positions through which they can influence pricing in their own interest. In addition, with interest rates reaching new lows, more and more people are flocking to the stock market to find a new destination for their money.
One thing is different: Europe has lost its position on the world stage when it comes to producing valuable companies. That's not necessarily a problem, certainly not if we're able to bring the world together and remove some economic boundaries. The internet was meant to do just that.
There is an interesting paradox that may lead to important new biological and medical insights. Peto's Paradox, named after the British epidemiologist Richard Peto, says that the rate of cancer in a species does not correlate with the number of cells an organism has. Studies have found that large animals such as elephants, rhinoceroses and whales have remarkably few known cases of cancer. This is paradoxical, at least to our current understanding, because the more cells an organism has, the more likely it is that some of those cells mutate uncontrollably and cause cancer.
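The naive expectation behind the paradox can be made concrete with a toy model: if every cell independently turns cancerous with some tiny lifetime probability p, an organism with N cells develops cancer with probability 1 - (1 - p)^N, which climbs steeply with N. A small sketch, where the per-cell probability is purely illustrative, not a measured biological value:

```python
# Toy model behind Peto's paradox: if each cell independently becomes
# cancerous with tiny lifetime probability p, an organism with N cells
# gets cancer with probability 1 - (1 - p)^N.
# The value of p here is purely illustrative.
import math

def p_cancer(n_cells: float, p_per_cell: float = 1e-14) -> float:
    # expm1/log1p keep the arithmetic stable for very small p
    return -math.expm1(n_cells * math.log1p(-p_per_cell))

human = 3e13  # roughly 30 trillion cells
whale = 1e17  # blue whale: orders of magnitude more cells

print(f"human: {p_cancer(human):.2f}")  # about 0.26 with this toy p
print(f"whale: {p_cancer(whale):.2f}")  # about 1.00
```

Under this naive model a whale would be all but guaranteed to develop cancer; the fact that it isn't is exactly the paradox the explanations below try to resolve.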
There are at least two possible explanations, stemming from specific cancer defences in large animals. Studying and understanding these may have a big impact on the way we fight cancerous tumours. The first explanation is the presence of so-called tumour suppressor genes, which essentially ensure that a large animal's cell needs many more mutations than a human cell needs to become cancerous.
The second potential explanation involves so-called hyper-tumours, best envisioned as a tumour on a tumour. Before a cancerous tumour grows big enough to become a threat, another tumour takes over, draining the 'original' tumour of its energy supply and thus starving it. The hypothesis is that in large animals, these hyper-tumours are more prevalent and so do their job more often.
Both potential explanations seem logical and could be turned into treatment methods. Nature has so much to teach and inspire us! The trick is knowing where to look.
In the debate about the usefulness of playing video games, there is positive news for the proponents. The Economist recently reported on research finding that 'playing video games in a lockdown can be good for mental health'. The timing is interesting, as governments and institutions across the world are contemplating regulations to protect children from “excessive screen time”.
According to Professor Andrew Przybylski of the University of Oxford, who led the study, there is a lack of robust evidence for many of the supposed harmful effects of video games. The project analysed the gaming habits and mental health of 2,537 players in Britain and North America, with an average age of 31-35 years.
The article concludes about the study:
"The researchers found that people who played the games for longer reported feeling better, on average, than those who barely played at all (see chart)."
I spent quite a bit of time staring at these graphs. Even though the blue dashed line suggests the mentioned correlation through its (very) slight slope, one could just as easily conclude from the cloud of dots that video gaming does not influence well-being at all. Other factors may be at play.
For instance, the mere act of playing (video) games, or simply the thought of being able to, could release endorphins and other pleasure hormones; this short-term relaxation may in itself explain the results. In addition, many of these video games involve social interaction, which might by itself explain the perceived benefits.
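To see how a slight slope can coexist with an essentially uninformative cloud of dots, here is a fully synthetic illustration (my own toy data, not the study's): a weak linear effect buried in noise still yields a positive fitted slope, while the correlation stays close to zero.

```python
# Synthetic illustration (not the study's data): a weak effect buried
# in noise produces a fitted regression slope, yet the correlation,
# the share of variance actually explained, stays near zero.
import random

random.seed(42)
n = 500
hours = [random.uniform(0, 10) for _ in range(n)]
wellbeing = [0.05 * h + random.gauss(0, 1) for h in hours]  # tiny true effect

mx = sum(hours) / n
my = sum(wellbeing) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(hours, wellbeing))
sxx = sum((x - mx) ** 2 for x in hours)
syy = sum((y - my) ** 2 for y in wellbeing)

slope = sxy / sxx              # fitted regression slope (the dashed line)
r = sxy / (sxx * syy) ** 0.5   # Pearson correlation (the cloud of dots)

print(f"slope = {slope:.3f}, r = {r:.3f}")
```

A line can always be fitted through a cloud; the interesting question is how much of the scatter it actually explains.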
Please let me know how you interpret the graphs. One thing is certain: as long as my kids are gaming, they're not reading the Economist and they won't use these graphs as evidence in discussions with me!
While we are busy electrifying our car fleet at an increasing pace, another industry responsible for a large share of global emissions is still way behind in putting this technology to use. I had the pleasure of seeing a small electric plane last year, capable of short flights with two people aboard, proving electric propulsion has definitely made its way into a small segment of aviation. Long-haul airplanes, however, do not look like they will transition to electricity any time soon.
Popular science channel 'Answers with Joe' on YouTube sums up the current status nicely, and with plenty of humour, in the episode 'Are electric airplanes doomed?'. The biggest reason jets are still lagging is the energy density of batteries. Even with all the advances in lithium-ion battery technology, jet fuel still holds 45-90 times more energy per kilogram. Carrying 45-90 times the current fuel weight is physically not an option for long-haul jets. Furthermore, batteries keep their weight throughout the flight, while fuel is burned off, improving the weight ratio as the journey progresses.
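That gap is easy to verify with rough public figures for specific energy (the numbers below are approximations I've assumed, not taken from the video):

```python
# Rough energy-density comparison behind the battery problem.
# Specific energies are approximate public figures, not from the video.
JET_FUEL_WH_PER_KG = 11_900    # kerosene, ~43 MJ/kg
LI_ION_WH_PER_KG_LOW = 150     # typical lithium-ion cell
LI_ION_WH_PER_KG_HIGH = 265    # state-of-the-art lithium-ion cell

lo = JET_FUEL_WH_PER_KG / LI_ION_WH_PER_KG_HIGH
hi = JET_FUEL_WH_PER_KG / LI_ION_WH_PER_KG_LOW

print(f"jet fuel holds {lo:.0f}x to {hi:.0f}x more energy per kg")
```

Even the optimistic end of that range leaves batteries dozens of times too heavy for long-haul flight, before accounting for the fact that their weight never burns off.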
Joe closes his story with the question of whether we will be able to innovate ourselves out of this predicament or whether we should adapt our behaviour. As for business travel, I feel the current pandemic has shown we can often do without, while remote meeting technology is still improving at an astounding pace.
Last week, Google announced it will abandon the use of third-party 'cookies' to track your browsing behaviour. The news was picked up broadly by media across the world. In a publicly released blog post, 'Charting a course towards a more privacy-first web', the company said:
"Today, we’re making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products."
I could not help but think: "yeah right, what's the catch?". Full disclosure: I'm not a big Google fan and generally avoid any product they put out there. However, billions of people use at least their search engine, so it's worth noting what Google's strategy and ambition are.
Analysing the statement, it's clear that the advertising model that pays for Google's services will not be scuttled. Rather than cookies, an array of other techniques will enable targeted advertising. For example, Google will create groups of users with similar interests and show everyone in such a group advertisements specific to those interests. In addition, Google's Chrome browser will generate an anonymous profile of your interests and use it to request appropriate advertisements. To understand more of the new techniques, Owen Williams published a short, easy-to-read summary.
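Conceptually, interest-based cohorts replace an individual identifier with a group identifier: advertisers see only which bucket you are in, never your browsing history. A deliberately simplified sketch of the idea (my own illustration, not Google's algorithm; in this toy version only identical interest sets share a cohort, whereas Google's actual FLoC proposal used locality-sensitive hashing so that merely similar histories cluster together):

```python
# Conceptual sketch of interest-based cohorts: users with similar
# interests share a cohort ID, and only that ID is exposed for ad
# targeting. Illustration only; Google's real FLoC proposal used
# locality-sensitive hashing over browsing history, not an exact hash.
import hashlib

def cohort_id(interests: set, n_cohorts: int = 1000) -> int:
    # canonicalise the interest set so identical sets map to one cohort
    key = ",".join(sorted(interests))
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_cohorts

alice = cohort_id({"cycling", "cooking"})
bob = cohort_id({"cooking", "cycling"})  # same interests, any order
assert alice == bob  # ads target the shared cohort, not the person
```

The privacy argument rests on the bucket being large: thousands of people share a cohort ID, so the ID alone should not identify anyone.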
As long as there's no price tag on Google's products and services, most notably its search engine, there will be a catch. After all, "when a product is free, you're the product" (the quote apparently dates back to a 1973 work by the artist Richard Serra). Depending on how you value your privacy, and in which context, there are good alternatives to all Google products. That comes at a different price: you have to put in some effort to find the ones that fit your needs.
Last week marked 10 years since the Fukushima Nuclear Disaster. The combination of an earthquake and a tsunami caused a cascade of events, ultimately leading to 3 nuclear meltdowns.
These events were an illustration of one of the biggest drawbacks to nuclear power. While the risks are small, failure can immediately have catastrophic results. Combined with the long-lived toxic waste that nuclear power generation produces, the events led to a halt in nuclear programmes in most of the Western world.
The Economist evaluated the current state of nuclear affairs and concluded that our response to the most recent disaster might not be the best in the long term. While solar and wind energy are on the rise and cheaper than ever before, looking at our energy supply purely in terms of watt-hours paints a flawed picture. The sun and the wind provide only intermittent power, and as grid-scale battery storage is not yet mainstream, we will need other sources for a reliable grid.
"Despite this, safe and productive nuclear plants are being closed across the rich world. Those closures and the retirement of older sites mean that advanced economies could lose two-thirds of their nuclear capacity by 2040, according to the International Energy Agency"
If the resulting gap is filled by fossil-fuel infrastructure, that infrastructure will then be in place for decades. For good reason, Bill Gates invested heavily in next-generation (safer) nuclear technology, though he decided to stop his efforts after Fukushima because the political will was gone. The Netflix show 'Inside Bill's Brain' gives a great peek into the process and the pitfalls of dealing with the matter. Our intuitive response to nuclear energy might not be in our best long-term interest.
In general, artists have never been amongst the best-paid people on this planet. You get a skewed image if you focus on the top 0.01% of painters, actors, writers and musicians, but these exceptions cannot hide the fact that the majority of people in the arts struggle to make a living with their craft.
It fascinates me how this contrasts with the way we consider all of our art forms to be uniquely human and invaluable to our well-being. We have gotten solace and comfort from art in our darkest moments and much of what we consider our historic achievements is art.
With modern incarnations of patronage (like patreon.com) as an example of technology solving this problem, I was therefore quite enthusiastic about the dawn of the NFT. An NFT (non-fungible token) is a way to buy digital art based on blockchain technology, albeit without formal legal and economic ownership of the piece.
The NFT space has exploded recently, with investors buying NFTs of everything from digital artwork and writing to videos of NBA action shots. Last week, a collage of work by Beeple sold at Christie's for US$69 million (not a typo).
My enthusiasm for NFTs took a turn for the worse after being confronted with Seth Godin's view and then reading this more in-depth article. While quite extreme in its conclusion, I have yet to find a fundamental flaw in its reasoning. If NFTs are a modern incarnation of a pyramid scheme, it will not end because of a lack of buyers, but because of a clash with our moral and environmental values.