How We Should Regulate and Protect Our Genetic Data

Recently, the case of the Golden State Killer, who was arrested many years after his crimes thanks to a public genetic genealogy website, has shown that our genetic data is not quite safe.

As explained in the article ‘Here’s the ‘open-source’ genealogy DNA website that helped crack the Golden State Killer case’, a relative of the suspect had uploaded a DNA profile to connect with relatives genealogically. Thanks to this public database of thousands of voluntarily uploaded profiles, investigators were able to trace connections to the suspect’s family. It seems that, in the US at least, there is a growing trend to have one’s DNA mapped.

The case sheds light on a little-known fact: “Even if we’ve never spit into a test tube, some of our genetic information may be public — and accessible to law enforcement. That’s because whenever one of our relatives — even distant, distant kin — submits their DNA to a public site hoping to find far-flung relations, some of our data is shared as well.”

I don’t think genetic data protection is enforced through personal data protection laws at the moment, and the closeness of family members’ DNA is an issue in itself: much of our genetic information is effectively held by someone else! We may need to regulate access to our genetic data more closely quite soon. Of course, the data should remain accessible to law enforcement under strict judicial supervision, but its availability should otherwise be strictly limited.

How People Constantly Take Decisions Based on Opinions

In my consulting work I am constantly astonished by how much people tend to take decisions based on opinions, without even taking a few hours to establish some quick facts about the situation.

Of course we all know that in the end we tend to take decisions based on our feelings, and that over-analysis is not good. However, in the professional field it is astonishing to see how many substantial decisions impacting many people are taken with limited analysis or basic fact-checking about orders of magnitude. A common example in my speciality is project scheduling and schedule forecasting: it is quite easy to establish the current slippage of a schedule and the current productivity level compared to expectations, yet decision-makers do not even take a few minutes to establish those facts.

As a consultant, a substantial part of my job is to establish some of those facts and question the worldview of decision-makers. By doing that I am often disturbing, because I frequently invalidate well-established opinions, to the point that I often need backing from top management for those exercises.

Please take a few minutes to gather some basic facts and orders of magnitude before taking decisions; it would greatly improve a number of situations. It is astonishing how many bad decisions are taken without basic fact-checking.

How Artificial Intelligence Transforms the Ad-financed Internet

Following up on our previous post ‘How the Foundation Principles of Internet May be Flawed‘, the issue with the ad-financed internet only surfaced with the emergence of powerful Artificial Intelligence. As described in the TED talk ‘We’re building a dystopia just to make people click on ads’:

“It may seem like artificial intelligence is just the next thing after online ads. It’s not. It’s a jump in category. It’s a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research […] And these things only work if there’s an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That’s why Facebook wants to collect all the data it can about you. The algorithms work better.”

Therefore, it is possible that the rise of AI combined with the ad-financed model is the fundamental reason why Google and Facebook are collecting so much data on us. The issue is that this may lead to “building this infrastructure of surveillance authoritarianism merely to get people to click on ads.” It might be time to change the business model of the internet.

How the Foundation Principles of Internet May be Flawed

There is an increasing uneasiness about the foundations and principles of the internet. In a TED talk, ‘How we need to remake the internet‘, Jaron Lanier explains that a fundamental mistake was made in the 1990s: the advertising model adopted to provide internet content for free.

“Early digital culture had a sense of a socialist mission: everything on the internet must be available for free (contrary to books, even if solutions like public libraries compensate to make them available to everyone). At the same time we loved our tech entrepreneurs that could dent the universe. The solution was the advertising model: free with ads. In the beginning it was cute.”

The comparison with what came before is of course not so clear-cut: newspapers, for example, have long relied on ads to finance their activity and could therefore be opinionated, but at least everyone knew which side the paper was on.

And as is now evident, the consequences of the advertising model are quite incompatible with the principles for governing the internet drafted under a UN mandate, which refer to universality and non-discrimination (refer to those principles here).

As Jaron Lanier concludes, “Our species cannot survive in a situation where if 2 people want to communicate they need to go through a third person that wants to manipulate them.”

How the topic of Ethics of Big Data and Artificial Intelligence is Growing

As already mentioned in previous posts, I find the excellent book by Cathy O’Neil, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy‘, quoted more and more in many publications. I also noticed that in the latest report on AI in France by Cédric Villani, the topic of ethics was given a prominent place.

Ethics in Big Data and AI is essential. “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.”

Some have complained that ethics takes up too much space in Cédric Villani’s report and that those considerations may make it harder to catch up in the field of AI. However, as recent experience shows, it is also essential not to risk the rejection of new technology on the grounds that it is not ethical, which would lead to crisis and non-acceptance.

Ethics and regulation in Big Data and AI are essential to create a balanced world where everyone will have opportunities. Let’s not avoid the debate and the regulations that will ensue.

How Our Data Is Really the Property and Worth of the Apps We Use: An Actual Demonstration

Recently I was in South East Asia when Grab announced it was buying Uber’s regional business. Uber immediately shut down, leaving drivers and clients stranded. Most interesting of all, as part of the transaction, Uber transferred to Grab the personal data of its users: history, trips taken, ratings, etc. The article ‘Grab acquiring Uber’s data trove is a major red flag. Here’s why‘ explains the issue and why most people don’t care – but should.

Although users had to manually re-create an account with Grab (most people had accounts with both services anyway), this transfer of data as part of a business transaction reminds us that our personal data belongs to the service we are using, and that it is an essential source of value for those services.

“Uber should be criticised for how it handled its data transfer. The company did not ask users for permission to transfer their data. Even by the low standards of tech companies, Uber didn’t even include a ‘click here to opt-out’ option.” Regulation about data privacy should definitely extend to the situation where a service we use is acquired.

How Videos can now be Faked Easily And What It Means

There is a deep change coming upon us: the ability to manipulate videos is now so advanced, and so easy to use, that videos can no longer be taken as proof. This issue is discussed at length in The Atlantic article ‘The Era of Fake Video Begins‘.

I believe the impact of this technological change is quite underestimated, in particular because it will be democratized in the coming months. Until now it was easy to fake or manipulate a soundtrack, but much harder to do so with video; in addition, videos are intrinsically more believable because “we see it with our own eyes”. New technology makes it easy to replace a face in a video, or otherwise manipulate footage in an undetectable manner. And this will be a huge problem, because we have come to rely on video as a means of proof (for example, by equipping police forces with video cameras). As the article says, “We’ll shortly live in a world where our eyes routinely deceive us. Put differently, we’re not so far from the collapse of reality.”

Some call these manipulated videos ‘deepfakes’, and it is quite a fitting name. We won’t be able to believe what we see any more. Video certification programs will have to be developed to give us trust. Regulation will have to kick in. A new world is coming.

How Facebook Faces a ‘Big Tobacco’ Addiction Industry Problem

I find the post ‘Facebook has a Big Tobacco Problem‘ just excellent – and the title is great too. Facebook is clearly addictive, pervades society, has adverse effects on mental health, and... is in denial.

“Facebook’s problems are more than a temporary bad PR issue. Its behavior contributes to a growing negative view of the entire tech industry.” Facebook is currently working hard to change its image, but the evidence of its effects on behavior keeps mounting.

This existential issue is a threat to the entire technological world and society will have to find a solution that will necessarily involve regulation.

Some comparisons developed in the post are quite chilling: “As in the 1990s, when Big Tobacco felt its home market dwindling, the companies decided to stimulate smoking in the Third World. Facebook’s tactics are reminiscent of that. Today, it subsidizes connectivity in the developing world, offering attractive deals to telecoms in Asia and Africa, in exchange for making FB the main gateway to the internet.”

It might well be that some kind of existential crisis will soon be needed to make social networks mature in terms of model and rules.

How to Leverage Randomness for Effectiveness

Amazon warehouses – the backbone of the company’s effectiveness – have from an early stage been organized completely at random: stuff is stored wherever there is space. The computer system tracks everything, and the company has found that it is more effective that way – less time spent deciding where to put things, and a better chance of finding an item stored nearby when picking it up. It also saves space and makes space utilisation more efficient.
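To make the idea concrete, here is a minimal sketch of this kind of ‘chaotic storage’: items go into any free bin at random, and a software index – not physical grouping – keeps track of where everything is. All names and the distance measure are my own invention for illustration, not Amazon’s actual system:

```python
import random

class ChaoticWarehouse:
    """Toy model of chaotic storage: no categories, just an index over bins."""

    def __init__(self, n_bins):
        self.free_bins = list(range(n_bins))  # bins with available space
        self.index = {}                       # item -> list of bin ids

    def store(self, item):
        # No sorting by category: pick any free bin at random.
        bin_id = random.choice(self.free_bins)
        self.free_bins.remove(bin_id)
        self.index.setdefault(item, []).append(bin_id)
        return bin_id

    def pick(self, item, position):
        # The same item may sit in several bins; pick the one
        # closest to the picker's current position.
        bins = self.index[item]
        best = min(bins, key=lambda b: abs(b - position))
        bins.remove(best)
        if not bins:
            del self.index[item]
        self.free_bins.append(best)  # the bin becomes free again
        return best
```

Because the index, not the shelf layout, carries the organization, the same item can sit in several places at once, which is exactly what makes a copy more likely to be near the picker. A real system would of course use shelf coordinates and travel distance rather than a simple bin-id difference.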

This Quartz post explains it all: ‘Amazon: This company built one of the world’s most efficient warehouses by embracing chaos‘.

This story is quite similar to what happened with email: while in the past I used to painstakingly sort my emails into various folders, nowadays I just leave them all in one big pile and use powerful search features to find what I need. Email sits there in no particular order, and I use another way to access the information I need.

With the increased power of mobile computing and networks, more and more applications of intelligent randomness will pop up. There are still a lot of areas where we take the effort to sort things by categories; it is worth considering whether this still makes sense nowadays.

It is amazing how randomness can be leveraged for increased efficiency and effectiveness!

How We Need to Audit the Key Algorithms That Drive our Lives

As an example of our previous post ‘How Algorithms Can Become Weapons of Math Destruction‘, New York City has decided to audit the key algorithms the city uses to decide on resource allocation.

The issue is described in detail in the post ‘New York City Wants to Audit the Powerful Algorithms That Control Our Lives‘. A task force will be created that “will audit the city’s algorithms for disproportionate impacts on different communities and come up with ways to inform the public on the role of automation“.

The issue of accountability is central: as algorithms take decisions that have a huge impact on people’s lives (school admission, access to social services, whether someone stays in jail), we need to come up with a way to reinstate a sufficient dose of accountability in those decisions and in how the code is developed.

This is a global first, and the initiative will certainly spread rapidly.

How Algorithms Can Become Weapons of Math Destruction

I very much like the book by Cathy O’Neil titled ‘Weapons of Math Destruction‘. The book basically shows that many algorithms used around us tend to reinforce racism and discrimination, creating huge social disruption and reinforcing social differences.

The book is quite pessimistic at times, but I believe it makes a point. And we probably don’t realize how much we are surrounded by algorithms that have a direct impact on real life, such as the software used by police to decide where to patrol more heavily, the software used for college and university admissions, or the software used to filter our resumes.

This book comes as part of a growing trend of concern about how much software may damage humanity (such as in the post ‘Technology is Breaking Humanity‘). The reaction around this topic will certainly become a mainstream trend in 2018.

How Social Media Ratings Can Be Tricked – Lessons Learnt on Personal Freedom

Tricking social media popularity has become quite an industry, and this includes fake-user factories in many countries. One of the funniest testimonies on the matter is the excellent “I Made My Shed the Top Rated Restaurant On TripAdvisor“. It is unclear whether it is fully genuine, but it is worth the read nevertheless, if only for the creativity of the author (and the making-of of the fake food pictures).

The interesting point in this post is how the author managed to trick TripAdvisor’s fraud checks (which rest on the assumption that nobody would fake an entire restaurant). It shows that the creativity of individuals is always greater than the creativity of the institutions behind the most popular services.

Of course, this also leads to the question of how much we rely on those social services for our daily decisions (because they add so much value to our lives compared to the old guides and similar solutions), and, if we are manipulated, to what extent. We are certainly somewhat manipulated, if indirectly, by other social media users and by restaurant and hotel owners. Does it exceed a limit that really jeopardises our freedom of decision? Is it really more than before, when we were manipulated by the editors of well-known guides? The question is open. The growing scrutiny of fake news and fake ratings will certainly give us a clearer view on the matter.
