Wired: Artificial Intelligence’s Faulty Foundations?

by Joseph P. Farrell, Giza Death Star
April 12, 2021


There is no doubt the world is moving through a “digital age paradigm shift”, and the next step is the much-vaunted artificial intelligence. The signs are all around us: Mr. Globaloney of finance crapitalism (as we like to call it here) has for decades been executing commodities, securities, and equities trades with computer algorithms, and now wants to roll out a cashless world with digital “currencies”, linking them to social credit systems and other draconian measures, like “vaccine passports”. The result will of course be a one-way mirror behind which Mr. Globaloney hides his own corruption. Additionally, we’ve seen article after article of a “transhumanist” stripe about how Mr. Globaloney wants to merge man and machine. Just last week I blogged about the US Army’s new “virtual reality” headset, meant to enable soldiers to see better and to make better tactical decisions.

The only problem, as I pointed out in that blog, was that the headset contract had been awarded to Baal Gates’ Microsoft, which doesn’t bode well for the tactical situation of the future: “Please suspend your firefight while Windows completes your update. This will take just a few minutes. We apologize for any inconvenience to your platoon or your enemy.”

Beyond this, I’ve tried to sound the warning about reliance on such systems by pointing out that no cyber system is ever totally secure, that the major powers have their own cyber warfare departments in their militaries, and that computer trading only divorces markets further and further from actual human risk assessment, as the pricing mechanism increasingly reflects the aggregate “decisions” of algorithms.

But with the move to Artificial Intelligence, a new danger looms: what if the foundational principles of Artificial Intelligence are themselves ill-founded? That’s the question addressed in the following Wired article by Will Knight, which was passed along by L.G.L.R., and it’s an article well worth pondering in its entirety, beyond the snippets we quote here:

The Foundations of AI Are Riddled With Errors

Ponder the following observation in connection with last week’s blog about the US Army’s new virtual reality headset:

The current boom in artificial intelligence can be traced back to 2012 and a breakthrough during a competition built around ImageNet, a set of 14 million labeled images.

In the competition, a method called deep learning, which involves feeding examples to a giant simulated neural network, proved dramatically better at identifying objects in images than other approaches. That kick-started interest in using AI to solve different problems.

But research revealed this week shows that ImageNet and nine other key AI data sets contain many errors. Researchers at MIT compared how an AI algorithm trained on the data interprets an image with the label that was applied to it. If, for instance, an algorithm decides that an image is 70 percent likely to be a cat but the label says “spoon,” then it’s likely that the image is wrongly labeled and actually shows a cat. To check, where the algorithm and the label disagreed, researchers showed the image to more people.
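
For the technically inclined, the check the MIT researchers describe can be sketched in a few lines: compare the class the model is confident about with the label the dataset supplies, and send the disagreements to human reviewers. The snippet below is only an illustrative sketch of that idea, not the researchers’ actual code; the function name, the 70-percent threshold, and the toy data are assumptions made for the example.

```python
import numpy as np

def flag_suspect_labels(pred_probs, given_labels, confidence_threshold=0.7):
    """Return indices of examples whose dataset label looks wrong.

    pred_probs   : (n_examples, n_classes) model-predicted class probabilities
    given_labels : (n_examples,) integer class labels supplied by the dataset
    """
    suspects = []
    for i, probs in enumerate(pred_probs):
        predicted = int(np.argmax(probs))
        # e.g. the model says "70 percent cat" but the dataset label says "spoon"
        if predicted != given_labels[i] and probs[predicted] >= confidence_threshold:
            suspects.append(i)
    return suspects

# Toy usage: example 0 is flagged (confident disagreement), example 1 is not.
probs = np.array([[0.70, 0.20, 0.10],   # model favors class 0, label says class 2
                  [0.10, 0.80, 0.10]])  # model agrees with the label
labels = np.array([2, 1])
print(flag_suspect_labels(probs, labels))  # -> [0]
```

The flagged images are then the ones shown to additional human reviewers, as the article goes on to describe.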

But why the mistaken labeling to begin with? This is where it would get “fun,” were it not for the fact that in certain circumstances, such as the US Army’s headset or a self-driving automobile, people’s lives are at risk. It seems that image recognition is based on massive statistical databases of people’s responses to ambiguous images:

ImageNet and other big data sets are key to how AI systems, including those used in self-driving cars, medical imaging devices, and credit-scoring systems, are built and tested. But they can also be a weak link. The data is typically collected and labeled by low-paid workers, and research is piling up about the problems this method introduces.

And then there’s the problem of selection bias:

Algorithms can exhibit bias in recognizing faces, for example, if they are trained on data that is overwhelmingly white and male. Labelers can also introduce biases if, for example, they decide that women shown in medical settings are more likely to be “nurses” while men are more likely to be “doctors.”

(I can’t wait for “wokeness” to be programmed into the US Army’s virtual headsets…)

Believe it or not, I couldn’t help but think of this problem in relation to a problem that my co-author Gary Lawrence and I pointed out in our book about the Common Core educational brouhaha, Rotten to the (Common) Core: namely, that with the move to computerized instruction in addition to computerized standardized testing, the biases of the “experts” and “programmers” of the tests often overruled actual facts, rendering standardized testing a means of determining conformity to a narrative or point of view, and less and less a determinant of the ability to think critically. My favorite example is the hypothetical multiple-choice question “Who killed President Kennedy?” with the multiple-guess answers “(1) The Soviet Union, (2) Cuba and Fidel Castro, (3) Lee Harvey Oswald, (4) A cabal of insiders representing various interests inside the US government.” Well, you can guess which answer will be “correct.”

On a more serious level, Lawrence and I pointed out the running battle between the mathematician (and friend of Albert Einstein) Banesh Hoffman and the Educational Testing Service in the late 1950s and early 1960s, when Hoffman absolutely impaled the Educational Testing Service on a poorly phrased physics question from one of its SAT tests; the ETS “experts” then made matters much worse when they tried to defend their “correct” answer. And Hoffman produced a variety of questions from actual tests to drill the point home. Sadly, no one really listened, so here we are, with one of the dumbest populations on the planet, and virtual reality headsets in the Army being run by Microsoft.

The bottom line, in other words, is that thus far standardized tests and artificial image-recognition systems still require human input… but that input becomes quite problematical when the data comes from the lowest common denominator of the collective, and one already dumbed-down to boot.

So is it a cat? or an enemy tank? Or a float in a parade? “Please suspend your firefight while Windows completes your update. This will take just a few minutes. We apologize for any inconvenience to your pla—”

“ERROR ERROR… Your image database update transfer has been interrupted; communication with the host is not possible.”

Newspaper headline: “Experts: Recent Data Transmission Interruption During Firefight was Russian Interference.”

See you on the flip side…


Connect with Joseph P. Farrell
