I love articles such as this one from the Verge, ‘Watch a police officer admit to playing Taylor Swift to keep a video off YouTube‘, showing how people adapt their behavior to take advantage of Artificial Intelligence and deceive the system.
The point is that YouTube removes videos containing copyright-infringing material; therefore, by playing music while being filmed, US police officers try to ensure that videos of their interventions cannot be uploaded to YouTube. Brilliant! (I am not sure how well that works, though!)
Anyway, that’s a good example of how people adapt their behavior to deceive AI and the digital ecosystem. I am quite sure the tech-savvy use many more strategies to evade modern surveillance and the ubiquity of photos and cameras. And we may increasingly adopt new behaviors to adapt to this digital world.
This is just the start of adapting our behaviors to deceive the digital ecosystem and the AI surrounding us. Expect this to become much more prevalent!
Leo Babauta, in his post ‘Delight in Uncertainty‘, explains how most people struggle with uncertainty in their lives, and highlights the positives that come with appreciating this uncertainty. As a recovering foe of uncertainty, this certainly resonates with me!
“We don’t like uncertainty, we want to avoid or control uncertainty, we get stressed when we can’t. And uncertainty is unavoidable: everything is uncertain all the time!“
So what can we do? First, recognize that certainty would be boring: “We might instinctively dislike uncertainty, but in truth, we would be so bored without it.” Uncertainty is also the place to learn and to grow.
It is not easy to welcome uncertainty, and it does require some practice. The usual corporate world generally does not provide it. Personally, I have been practicing since I started my company: I am not quite sure who my clients will be or what my activity will look like in 3 to 6 months! And I end up enjoying it, because I know this leaves space for unexpected opportunities.
Leo Babauta insists on some practices to learn to welcome uncertainty: notice the uncertainty, find the joy in it, and dance with it.
Whatever your approach, it is very satisfying to have a confident relationship with uncertainty. And yes, it takes time and practice, because our Industrial Age education taught us how to behave in a certain world. Nevertheless, we can embrace uncertainty, and dance with it!
Following up from our previous post ‘How to Recognize Crazy Innovative Ideas that Aren’t so Crazy‘, and taking a slightly different viewpoint, this article from The Conversation addresses one of the main aspects of decision-making: ‘Gut feel or rational analysis? Both may be vital in finding winning ideas for new markets‘. It summarizes research on the processes companies follow when trying to find winning ideas to penetrate new markets. And the conclusion is clear: intuitive approaches are more powerful at uncovering new markets.
Like individuals, companies differ in how they evaluate ideas: “Some [companies] will instigate procedures that encourage more analytical decisions – for example, using formal idea evaluation tools such as grid analysis techniques and weighted point-rating evaluation matrices. Others may opt for more informal ways of evaluating ideas, which leave more room for evaluators to draw on their intuition.”
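To make the contrast concrete, here is a minimal sketch of a weighted point-rating evaluation matrix of the kind the article mentions. The criteria, weights, and ideas are entirely hypothetical; the point is only to show the mechanics of such a formal tool:

```python
# Toy weighted point-rating evaluation matrix (hypothetical criteria and weights).
# Each idea gets a 1-5 score per criterion; the weighted sum ranks the ideas.

CRITERIA = {  # criterion -> weight (weights sum to 1.0)
    "fit_with_strengths": 0.4,
    "market_novelty": 0.3,
    "feasibility": 0.3,
}

ideas = {
    "Idea A (extends current product line)": {"fit_with_strengths": 5, "market_novelty": 2, "feasibility": 4},
    "Idea B (enters an adjacent new market)": {"fit_with_strengths": 2, "market_novelty": 5, "feasibility": 3},
}

def weighted_score(scores):
    """Weighted sum of the per-criterion scores."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

# Rank ideas from best to worst weighted score.
for name, scores in sorted(ideas.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
# Idea A scores 3.80, Idea B scores 3.20: the strengths-aligned idea wins.
```

Note how the weight placed on fit with current strengths makes the matrix favor the incremental idea over the new-market one, echoing the research finding below that formal evaluation tools bias selection toward a company’s existing strengths.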
The conclusion of the research is quite clear:
- Rational idea evaluators tend to seek out ideas that focus on a company’s current strengths.
- Intuitive evaluators focus more on identifying opportunities to enter new markets.
- For intuitive evaluators, a highly formalised evaluation process reduces their emphasis on finding opportunities in new markets.
Hence it appears that intuitive approaches are better suited to exploring new market opportunities, and it also appears important not to introduce rational analysis too early in the process, letting intuition identify opportunities first!
This interesting article, ‘Space Force scientist warns it’s ‘imperative’ the US military experiment with human augmentation and AI to stay ahead of Russia and China‘, exposes how military competition drives human augmentation. And what happens in the military will undoubtedly later spread to civilian use.
“[this Space Force scientist] announced we are entering the age of ‘human augmentation,’ which is crucial to the US’s national defense in order to not ‘fall behind our strategic competitors.‘
It proposes in particular to use self-learning algorithms to develop innovative strategies (such as the AlphaGo algorithm, which taught itself how to play Go). This would lead to a battlefield combining human and AI agents (probably including drones). Therefore, human agents will need to be augmented to work fully together with AI and fully participate in the battlefield.
This development was expected but we can now anticipate that it may go faster due to increased competition in the arms race between nations.
The challenge, I believe, will be to effectively combine the virtual battlefield with real battlefield conditions: in effect, the digital twin of the battlefield will have to reflect actual conditions on the ground, and this will certainly be a major challenge in the years to come.
Valeria Maltoni, in her post ‘Social Media Bubbles‘, reminds us that “Social media algorithms determine what you see when you search and scroll the platforms. Not your friends.” Therefore, we are at the mercy of an algorithm update. Hence the idea, in some governments, of regulating those updates.
We all know that Google or Facebook algorithm updates create substantial disruption in how search or feed results are displayed, causing considerable dismay to all those whose businesses depend on this natural or paid visibility. It also funds a coterie of search gurus, and naturally increases GAFA revenues as people end up paying to get better visibility.
The Australian government has been particularly at the forefront of trying to regulate the GAFA. “If the bill passes in one form or another, which seems likely, the digital platforms will have to give the media 14 days’ notice of deliberate algorithm changes that significantly affect their businesses. Even that, some critics argue, is not enough for Big Tech.”
Interestingly, this shows that the GAFA are increasingly seen as a sort of public service, with real-life implications for people and companies. Of course, this sits somewhat at odds with the commercial nature of those companies.
This tension between public service and the nature of the GAFA as commercial enterprises will only increase in the coming years as we become increasingly dependent on their services.
In this very interesting article (in French), ‘L’expert et le politique face à l’inconnu‘, the authors explain that there is a substantial difference between the “unknown” and the “uncertain”. Under uncertainty, the context of decision-making remains somewhat stable, allowing some sort of objective, analytical approach; in a crisis, the unknown prevails: the environment changes faster, and decision-making has to follow other criteria.
A crisis such as the Covid crisis falls into the ‘unknown’ category: knowledge and the environment change faster than the usual setting for rational decision-making allows. This is why expert groups have not always been relevant, and why political decision-making has become so important. Expertise becomes hard to read, as it may change significantly over time because the underlying knowledge has changed significantly.
The most important point of the paper is that in the ‘unknown’ situation, our usual decision-making approaches are not quite relevant, but we often fail to identify that, or at least we identify it too late to be quite useful. Our organisations, reporting and decision-making processes are not fit for the ‘unknown’.
It is essential to get better at identifying those few situations that evolve so fast that they are in the ‘unknown’ category and make sure we understand that decision-making needs to be different.
Following up on our previous post ‘How the GameStop Stock Event Ushers a New Era of Collective Resistance‘, new reports have emerged of bots also having had a significant impact on what happened to this particular stock (see ‘Did Bots Help Push GameStop And Other ‘Meme Stocks’? A New Report Says Yes!‘ or ‘Thousands Of Bots May Have Played Role In GameStop Hype: Report‘). Those bots could have influenced social network users by amplifying an existing trend.
“It should be noted that real human beings did indeed start the conversation and push surrounding the GameStop stock and other meme stocks. The report indicates that bots were at least partly responsible for hyping and promoting these stocks once the initial Redditor-inspired campaign took off, however.” We thus seem to be in the presence of a phenomenon of bot-amplification of a trend.
This shows how bots can amplify or, conversely, moderate the impact of trends on social media, probably depending on how they are driven by the people who create them.
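As a purely illustrative sketch (all the parameters are made up, not taken from the reports), a toy growth model shows how even modest bot reposting compounds a trend’s visibility over successive rounds of sharing:

```python
# Toy model of bot amplification (illustrative numbers only):
# each step, real users share a fraction of the current visibility,
# and bots repost a fraction of those organic shares.

def simulate(steps: int, organic_rate: float, bot_factor: float) -> float:
    visibility = 1.0
    for _ in range(steps):
        organic = visibility * organic_rate   # shares by real users
        bot_boost = organic * bot_factor      # bots reposting those shares
        visibility += organic + bot_boost
    return visibility

baseline = simulate(steps=10, organic_rate=0.2, bot_factor=0.0)   # ~6.2x
with_bots = simulate(steps=10, organic_rate=0.2, bot_factor=0.5)  # ~13.8x
print(f"without bots: {baseline:.1f}x, with bots: {with_bots:.1f}x")
```

The point of the sketch is the compounding: because bots act on each round of organic sharing, a modest per-step boost more than doubles overall visibility after only a few iterations.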
This example, like many others, shows how public opinion can be highly vulnerable to bots that mimic social media users and serve to amplify certain trends, shares, and opinions at the expense of others. It is a great opportunity for those who want to promote their views, and at the same time an issue to be regulated, to ensure that opinion diversity remains and to prevent extreme trends from prevailing.
I like this quote by Helen Keller: “Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure. Life is either a daring adventure, or nothing“.
This quote brings forth several aspects. The first is that security is a superstition: although we strive for it in modern society, we need to be aware that this is not a natural state, and we should be careful not to become too naive.
It follows logically that it is not good to constantly avoid danger. While one can be prudent, it is quite useful to be adventurous.
And in any case, being adventurous brings us new discoveries and enlightens our lives.
Definitely, security is not a natural state, and we should not let ourselves become complacent. Life needs to be an adventure.
Cyberattacks, and the more systemic issue of cyberwar, are becoming a concern, and some expect that 2021, or at least the 2020s, will see the first emergence of visible cyberwar. Many elements point to the increased use of government-backed attacks in cyberspace: from Russia’s involvement (read Wired’s ‘Russia’s global hacking efforts are going to unwind in 2021‘) to a number of events in the Middle East involving Israel, Iran, and other neighboring countries.
However, as the Wired post explains, the government hand is more and more obvious, and the excuse of unknown private hackers is quickly becoming inadequate. In addition, cyber defenses are being developed. “The allied objective will be deterrence by denial, raising the costs to the Russian attackers (including identifying the culprits by name) and reducing the value of expected gains. In 2021, we will have active cyber defences of government networks and those of critical national infrastructure to identify hostile penetration attempts.“
Cyberattacks have become much more prevalent with Covid confinement and increased remote work; however, one can now be certain that the threat has been identified and, as always, some form of arms race will develop around cyberattacks.
Cyberwar – with actual impact on infrastructure and physical life – or at least cyberattacks may become an actual factor of international security in the next few years.
This interesting article, ‘Party supporters shift views to match partisan stances‘, mentions a Danish scientific paper that studies how the opinions of political party members changed after the party’s leadership changed. The researchers found that opinions could shift significantly to match the leader’s.
“Supporters of a political party change their policy views “immediately and substantially” after that party switches its position on an issue, new research suggests, a sign that political elites could be shaping the opinions of the voters whose views they are supposed to represent.“
In general, this is aligned with my experience in (business) organisations: I am always amazed at how quickly an organisation is shaped by its leader, and this is particularly visible, for better or worse, when the leader changes. However, it was less obvious to me in a looser setting like a political party.
And indeed it is an interesting question in this case, as a political party is supposed to represent the views of its members. Or is it really? Is it not more a way to align around a number of main positions in order to seek power? This certainly provides interesting food for thought about the operation of modern democracies.
I like this quote from Muhammad Ali: “Looking at life from a different perspective makes you realize that it’s not the deer that is crossing the road, rather it’s the road that is crossing the forest.”
I find it quite a striking example of how to look at things differently, or at least of how much distance we need to take to be able to look at things differently.
I find that taking a systemic view of complex situations is extremely helpful; it is an important way to reach the root cause of issues.
One further step, once we have a good systemic view, is to try to go one dimension up so that we take a really all-encompassing view of the situation. This is where we can attain such viewpoints like the one of Muhammad Ali about the deer crossing the road.
Practicing changing viewpoints and moving up a level of observation is quite an important skill for better understanding our world. Are you practicing?
This Wired article, ‘How 30 Lines of Code Blew Up a 27-Ton Generator‘, exposes at length the 2007 Aurora experiment: how a small file was able to destroy a large diesel generator, demonstrating the vulnerability of hardware to hacking (see also Wikipedia’s ‘Aurora Generator Test‘). This shows that well-designed hacking can be targeted to destroy fully non-digital hardware. What about our vulnerability now that IoT is widespread and most hardware also hosts a large amount of software?
I find this article worth reading because it shows that the hardware destruction was carried out indirectly, by analyzing ways of making the generator malfunction. This requires a lot of analysis and is not straightforward, but it remains impressive: it shows it could be relatively easy to heavily disrupt the infrastructures we have come to rely on.
It demonstrates that, with the right focus and willingness, hacking can have a substantial impact on hardware. This was demonstrated again in the famous affair of the destruction of Iranian uranium-enrichment centrifuges through hacking.
The scary part is that, with our increasingly connected hardware – cars, key house control systems – our vulnerability has probably increased many times over compared with a decade ago. There is certainly something to be done to ensure that our infrastructures remain secure in the future!