“While many AI and machine learning deployments fail, in most cases, it’s less of a problem with the actual technology and more about the environment around it,” says Harish Doddi, CEO of Datatron. Moving to AI “requires the right skills, resources, and systems.”
“While it’s arguably true that AI can add significant value to practically any department across any business, one of the biggest mistakes a business can make is to implement AI for the sake of implementing AI, without a clear understanding of the business value they hope to achieve.” In particular, understanding how data biases and poor data hygiene affect AI algorithms, and how those effects influence performance, appears to be an essential capability.
In addition, an organization’s processes, and particularly how its data is produced, gathered and structured, appear to be an essential area to review and upgrade when implementing AI-based tools.
Like any powerful new tool, AI has a transformational impact on organizations and the way their data is gathered and managed. This should not be overlooked when implementing those new capabilities.
“Creating hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They are designing neural networks (a type of machine-learning setup with a structure inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.”
The interesting part here is the reduction of human biases, a topic which comes back several times in the article: avoiding preconceived ideas and theories, and probably the burden of the institutional view of things. AI can provide an independent view, and the combination can spark creative and innovative output.
I am convinced that we will find AI to be a great help rather than a competitor in all creative endeavors, like scientific research. And this is just the beginning!
In this interesting article ‘Is Amazon Changing the Novel?‘, the author takes a historical tack to explain how the publishing medium has always influenced the format of novels, and how new distribution channels like Amazon are now changing it again.
In the 19th century, novels were often published in episodes in newspapers or, as explained in the article, in several volumes that could be borrowed from public libraries one at a time. This definitely influenced the way they were written, including the need to maintain attention through suspense at the end of each part.
Now “Amazon […] controlled almost three-quarters of new-adult-book sales online and almost half of all new-book sales in 2019” (in the US, one can presume), and in particular through e-book publishing on Kindle, it is definitely changing what novels look like. What are the influences at play?
“The platform pays the author by the number of pages read, which creates a strong incentive for cliffhangers early on, and for generating as many pages as possible as quickly as possible. The writer is exhorted to produce not just one book or a series but something closer to a feed—what McGurl calls a “series of series.” In order to fully harness K.D.P.’s promotional algorithms, McGurl says, an author must publish a new novel every three months.” I also believe it tends to make novels shorter on average, as well as part of a series. A bit like Netflix promotes series over movies, with the result being a much longer total time spent in front of the screen!
The rise of Amazon as a major publisher and a driving force in e-book publishing will shift the novel toward shorter formats and new ways to consume it. The influence of the publishing medium on the novel has always been there, and it continues.
“Meguro Ward plans to put all work involving floppies and other physical storage media online in fiscal 2021, and Chiyoda Ward plans a similar transition within the next few years. Minato Ward moved its payment procedures from floppies to online systems in 2019.” Reliability is mentioned as one of the reasons why this outdated medium is still being used – but also, probably, convenience out of habit!
In general, I have not been overly impressed by the modernity of IT systems in Japan, much as in the US when it comes to mobile phone networks. It is a general rule that the place where something gets invented invests a lot in the infrastructure to support the first generations of the technology and then lags in adopting newer versions, because of the sunk cost in the existing infrastructure, whereas territories that adopt a technology later can invest directly in the newer versions.
We constantly underestimate how much older technology remains hidden at the core of our modern life. I would not be astonished if many telecommunications companies still ran 20- to 30-year-old equipment in some core functions, just because it is reliable and maintainable.
This reminds us that the transition to a new generation of technology is never as complete and comprehensive as we generally think, and that early innovators tend to keep older technologies online longer.
While the app is “targeted towards telecom scams, which are among the most rampant crimes in China. In 2020 alone, Chinese police reportedly cracked around 250,000 such cases”, “there are concerns over the extent to which the app is surveilling users.” The app asks for a lot of access and apparently reacts whenever users do things like visiting foreign websites.
This is an interesting example of what can happen when there are no strict laws on the use of personal data. Of course we all know that it is relatively easy to access phone data for someone with the capability to do so. However, having users install an app with potentially significant spying capability is something new at this scale.
This provides another example of the always delicate balance between the benefits of technology and its potential drawbacks, and of how regulation is essential to protect citizens. The same issue is also of concern with Facebook and others. It is time for strict regulations to come into force that reflect each country’s view on personal freedom and the need for surveillance to avoid social disruption.
The point is that AI needs to be trained, and therefore it will be trained on existing content – content scraped from the web. The post goes into detail on the origin of the training databases used for content-generating AI. They are not huge, and while the people who created them have tried to filter out the worst, they mainly contain average content.
“Which means that natural language models will inherently be biased towards creating mediocre content, content that’s readable and coherent, but not compelling or unique, because that’s what the vast majority of the language is that they are trained on.”
The post continues with real-life experiments on actual content, asking an AI algorithm to fill in the remainder of a text. It quickly appears that it cannot imitate unique writing styles or unique ideas.
Of course, the problem is that AI-generated content can be produced much more easily and may flood the media, drowning out the best content. On the other hand, people have learned where to find unique content, so for the foreseeable future, as long as your content is unique in style and substance, AI will not be able to imitate you!
The trend is particularly acute in the US, where health insurance is provided by private companies. It starts with some shocking statements about companies prohibiting smoking and other possibly health-impacting behaviors because of insurance fees, and other companies promoting health-and-wellness programs that appear to be effectively mandatory. “Wellness programs are about exercising that leverage, reducing the risk profile of employees and thus cutting the employer’s costs for health insurance plans.”
However, in the modern world this means using apps and other devices to monitor progress and connect with colleagues, and those could ultimately be used for control purposes. Examples are given, from insurers that require the wearing of personal health-monitoring devices to cars fitted with black boxes to determine driving patterns. All of this leads to possible discrimination in insurance access and pricing, and to a much more personalized influence on behavior. Boundaries to this approach and rules around fairness will have to be imposed by law.
With the development of personal devices and technology, insurance companies will certainly gain improved insight into client behavior. Lawmakers will have to follow those trends closely to set the right boundaries.
In this post ‘Negative marginal cost‘, Seth Godin highlights that not only does digital allow production at zero marginal cost, but that when network effects are added, the marginal cost can actually become negative.
Negative marginal cost means that it costs more to produce less, or that it costs more to have fewer people connected and contributing. Because the network effect grows exponentially, each additional user actually creates value for the whole community simply by being present.
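This arithmetic can be made concrete with a toy model (my own illustration, not from Seth Godin's post): assume a Metcalfe-style network whose total value grows with the square of the number of users, while each additional user costs a constant amount to serve. The parameters `k` and `c` are arbitrary assumptions chosen for illustration.

```python
# Toy model: network value grows quadratically (value = k * n^2),
# while serving one more user costs a constant amount c.

def marginal_net_cost(n, k=0.01, c=1.0):
    """Net cost of adding the (n+1)-th user: serving cost minus value created."""
    added_value = k * ((n + 1) ** 2 - n ** 2)  # extra value from one more user
    return c - added_value

# With these toy parameters, the net cost of an extra user shrinks as the
# network grows, and eventually turns negative: adding a user pays for itself.
for n in (10, 50, 100):
    print(n, round(marginal_net_cost(n), 2))
```

The design point is simply that the marginal value term grows with `n` while the cost term is flat, so past some network size the "cost" of one more user is negative, which is exactly the regime the post describes.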
As Seth Godin writes, “Moving from expensive to cheap to free to “it’s a bonus to add one more person” changes our economy and our culture forever.” Zero marginal cost was already the internet revolution; negative marginal cost is the social network revolution.
While this explains the exponential development and success of social networks, it is still useful to remember that the internet consumes a lot of resources and energy, and I am still not sure whether the marginal cost would remain effectively negative once those are added in – that’s actually quite an interesting research topic.
We always underestimate network effects, just as we underestimate exponential growth, and they indeed create an advantage to adding users. We are just at the beginning of the network revolution!
“A pop-up warning that a news item or website contains dubious or disputed information will not save us from bad information, but it will at least get people thinking. They will need to make a conscious decision to ignore the warning. Hopefully they will instead consider the links and references provided to more reality-based sources. This is basic digital literacy.”
It turns out that many such plug-ins are already available (at least in English), and some are mentioned in the post. They provide warnings and truthfulness indexes for the sites and news the user consults. At the very least, this would prompt verification across sources.
Of course this will not prevent some people from believing that this is yet another conspiracy preventing them from spreading their truth, or from simply ignoring the warnings. Fake news is not new: what’s new is that it can spread globally and exponentially at zero cost. Identifying fake news is a first step in regaining our freedom.
I am looking forward to good-quality fake-news checking becoming standard. Of course, in this arms race, fake news will get better at evading the checks, and there will be a never-ending race to uphold real facts.
Seth Godin, in this post ‘The inevitable decline of fully open platforms‘, shows how fully open platforms (i.e. those without any content curation or filtering) fall prey to spammers and inappropriate content. Still, there is also a need to maintain some balance in the administration of the network so as to benefit from its full capability.
“The tension is simple: If a platform is carefully vetted and well-curated, it meets expectations and creates trust. If it’s too locked down and calcifies, it slows progress and fades away. […] Too much curation stifles creativity, opposing viewpoints and useful conversation. But no curation inevitably turns a platform over to quacks, denialists, scammers and trolls.”
Even on private social networks, such as those implemented by large organisations, curation and administration are required. This is often forgotten, and it is clear that it can sometimes be seen as pure censorship. The balance needs to be clearly set between removing offensive content and removing content just because it does not please the owners of the network. One does not want to end up like an autocratic regime where any content contrary to the currently acceptable political opinion is removed.
Debate rages over whether Facebook, Twitter and the other social networks curate enough or too much, and whether they devote enough resources to it. The balance is not easy to strike, but for any social network founder, curation is probably one of the most strategic activities in operating a social network.
In this interesting post ‘Big Tech, The New Space Invaders’, Frederic Filloux explains how Big Tech is invading space services with money and a brutality that will significantly change this market.
He describes the emergence of space-based services and how the GAFA are now on an acquisition frenzy. “Space has become an inescapable part of their core business of data collection, transfer and processing, with multiple layers of applications, including a growing demand for AI processing. For the consumers of satellite images and signals — insurance companies, defense, agritech sector, financial services — working in Amazon, Microsoft or Google Cloud environments is almost the natural thing to do as the tools are de facto standards.” For example, “‘Amazon played it quite well by offering to the US Geological Survey and NASA to process the huge volume of data generated by its Landsat program. They did it for free in exchange for bulk access to the data. That was meant to be mutually beneficial.’ It was particularly beneficial to Amazon Web Services which is now the standard gateway to access and process satellite data. AWS provides unparalleled storage and computing power with dozens of easy to use applications dedicated to spatial analysis, refined and trained by Landsat’s trove of data”.
This of course raises questions about strategic dependence on American companies for these strategic services. According to Frederic Filloux, this would also be a strategy aimed at minimizing the impact of the current antitrust drives – making the GAFA indispensable to US national security.
In any case, it is clearly a deeply worrying development to see space-generated data increasingly captured by the GAFA, and an awakening of governments on the topic would be useful.
In a newsletter, Christopher C Penn (link to his blog Awaken your Superhero) writes about the ‘demise of the T-shaped marketer’, arguing that AI is rapidly eating the concept – quickly producing mediocre content and thus replacing the generalist side.
The ‘T-shaped marketer’ is someone with a vast array of generalist skills and one particularly deep area of specialization. Such a profile is widely recognized to be a rare beast, and such people command a very high value on the market. It is rare because it is difficult to be both a strong generalist and a strong specialist, as the two require quite different intellectual approaches.
Anyway, Christopher C Penn’s point here is that as AI develops (and even while it still produces quite mediocre output), it is much better at bringing together all sorts of information, and it is thus in competition with the generalist side.
“Why does this myth of the T-shaped person endure in marketing and business? The reality is that most of the time, mediocrity is sufficient to get the job done.” “As the line of mediocre output from AI advances, it will do more and more of the mediocre work, the stuff that everyone can do to some degree. That line advances a little more each year; three years ago, natural language generation was in a sorry state of affairs. You wouldn’t even consider using machine outputs for final product. Today, machines can write the same bland press releases humans can, with the same average level of quality. Three years from now? Those machines will probably crank out better blog posts than the average person.” The conclusion would thus be rather to focus on being really good at something special. “Good enough isn’t good enough any more.”
It is quite a good question for me, because I personally strive to achieve something like a T-shaped competency, believing that complementing deep expertise with the breadth of a generalist approach is quite beneficial. The question is really how much generalist thinking can inform and improve the area of specialization. I am convinced that while one must definitely be very good in a narrow domain, keeping a broad overview remains quite essential.