Kagan: Weakest link in AI chain is the human factor
Artificial Intelligence is one of the most exciting, amazing and powerful new technologies we have ever developed. However, there is a flaw in AI. Whether it is helpful or hurtful, right or wrong, good or bad depends on one thing: the human factor. It depends on who writes the code and what that code says.
You see, AI does not appear out of thin air. It is not provided by god or nature. In fact, at its origin, it’s just computer software. It’s not good or bad, right or wrong, left or right. It’s just software.
Whether that outlook is positive or negative, good or bad, glass half full or glass half empty, is determined entirely by the code that runs it.
And this code is written by people, men and women, with all their flaws in thinking and feeling. We are all very different. And as we know, people have many flaws.
Just look at the arguments between left and right in the most recent election results to see these differences in real time.
A chain is only as strong as its weakest link. And AI is only as strong as ours.
I put to you that AI, as powerful, strong and amazing as it is, still has this weak link, the human factor, at its heart.
Artificial intelligence gone rogue
So, we must use our intelligent and fair minds to protect our society from AI gone rogue. If we don’t protect ourselves and our society, we will be responsible for all the significant damage that will occur as we move forward.
We cannot be sucked into la-la-land thinking that everything will be OK all by itself. Don't be fooled. The problem is, if AI thinks like you do, you won't see a problem. However, if it does not think like you do, then you will think it's flawed.
It’s not a question of right or wrong, only of perspective.
I know that today there are plenty of leaders and companies in the AI space who are honest and trustworthy. I have already met many, who have briefed me on who they are, what they do and what they want to accomplish.
However, that does not mean all are honest and trustworthy with this nuclear weapon of technology.
Also, there are plenty of others in the space today who are driven by ideas other than being honest, fair and trustworthy. Yet, everyone has a smile and a convincing story to tell.
How do we tell the difference? And that is a key problem.
In fact, many leaders are as honest as the day is long; they may just have a skewed outlook, one that works for them but that the rest of the world would not agree with.
These are the people, companies and technologies we must recognize and protect ourselves from.
We tend to think a machine is honest and unbiased, but it's not. A machine is neither, on its own. It simply calculates faster than we can. It does not have an opinion.
Does AI give a computer the power to think?
AI lets a machine have an opinion. That sounds wonderful. And it can be, as long as that opinion is not corrupted, either intentionally or simply because it reflects the code writer's own take on the world.
People can all look at the same thing and have different opinions.
For example, is the glass half full or half empty? What about the pictures that come out every year, the ones we all play games with, saying what we see? It seems half the population sees one thing and the other half sees something else.
Both are correct. There are no rights and wrongs. But both are very different.
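To make the point concrete, here is a minimal, purely illustrative sketch (the function names and the 50% cutoff are hypothetical, not any real AI system): two developers write code to answer the same glass question, and each bakes in their own outlook.

```python
# Hypothetical sketch: two developers encode the same question differently.
# Neither function is "wrong" -- each reflects its author's perspective.

def classify_optimist(fill_ratio: float) -> str:
    # Developer A decides that exactly 50% counts as "half full".
    return "half full" if fill_ratio >= 0.5 else "half empty"

def classify_pessimist(fill_ratio: float) -> str:
    # Developer B decides that exactly 50% counts as "half empty".
    return "half empty" if fill_ratio <= 0.5 else "half full"

# Same glass, same measurement -- the answer depends on who wrote the code.
print(classify_optimist(0.5))   # half full
print(classify_pessimist(0.5))  # half empty
```

Both programs are internally consistent and both run without error; the disagreement comes entirely from the human factor.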
If that’s the case, how can we expect a machine to understand this and give us the correct information we want or need?
This is the dilemma with AI.
How we deal with innocent or purposeful misdirection is a real challenge that we need to have a real solution for before we depend too much on artificial intelligence.
We need to think about and address the flaws in AI. The human factor is the weakest link in this otherwise amazing technology.
The question going forward is very simple, but very significant. How do we steer AI and keep it on the right path? We had better be thinking about it very seriously. Our future depends on it.