Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s intense fear about the way the technology works. A 2017 MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned, “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute at NYU recommended that public agencies responsible for criminal justice, health care, welfare, and education shouldn’t use such technology.
Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a "black box," seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of AI, the term more broadly suggests an image of being in the "dark" about how the technology works: We provide the data, the models, and the architectures, and then computers give us answers while continuing to learn on their own, in a way that's seemingly impossible, and certainly too complicated, for us to understand.
There’s particular concern about this in health care, where AI is used to classify which skin lesions are cancerous, to identify very early stage cancer from blood, to predict heart disease, to determine what compounds in people and animals could extend healthy life spans, and more. But these fears about the implications of the black box are misplaced. AI is no less transparent than the way in which doctors have always worked, and in many cases it represents an improvement, augmenting what hospitals can do for patients and the entire health care system. After all, the black box in AI isn’t a new problem due to new tech: Human intelligence itself is, and always has been, a black box.
Let’s take the example of a human doctor making a diagnosis. Afterward, a patient might ask that doctor how she made that diagnosis, and she would probably share some of the data she used to draw her conclusion. But could she really explain how and why she made that decision, what specific data from what studies she drew on, what observations from her training or mentors influenced her, what tacit knowledge she gleaned from her own and her colleagues’ shared experiences, and how all of this combined into that precise insight? Sure, she’d probably give us a few indicators about what pointed her in a certain direction — but there would also be an element of guessing, of following hunches. And even if there weren’t, we still wouldn’t know that there weren’t other factors involved, of which she wasn’t even consciously aware.
If the same diagnosis were made with AI, it could draw on all available information about that particular patient, as well as data anonymously aggregated across time and from countless other relevant patients everywhere, to make the strongest evidence-based decision possible. It would be a diagnosis with a direct connection to the data, rather than human intuition based on limited data and derivative summaries of anecdotal experiences with a relatively small number of local patients.
But we make decisions in areas that we don’t fully understand every day, often very successfully, from the predicted economic impacts of policies and weather forecasts to how we conduct much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of AI: Human intelligence can reason and make arguments for a given conclusion, but it can’t fully explain the complex, underlying basis for how it arrived there. Think of what happens when a couple gets divorced because of one stated cause, "infidelity," when in reality there’s an entire unseen universe of intertwined causes, forces, and events that contributed to that outcome. Why did they choose to split up when another couple in a similar situation didn’t? Even those inside the marriage can’t fully explain it. It’s a black box.
The irony is that, compared with human intelligence, AI is actually the more transparent of the two. Unlike the human mind, AI can, and should, be interrogated and interpreted. From the ability to audit and refine models and expose knowledge gaps in deep neural nets to the debugging tools that will inevitably be built and the potential ability to augment human intelligence via brain-computer interfaces, there are many technologies that could help us interpret artificial intelligence in a way we can’t interpret the human brain. In the process, we may even learn more about how human intelligence itself works.
Perhaps the real source of critics’ concerns isn’t that we can’t “see” AI’s reasoning — it’s that as AI gets more powerful, the human mind becomes the limiting factor. It’s that, in the future, we’ll basically need AI to understand AI. In health care as well as in other fields, this means we will soon see the creation of a new category of human professionals who don’t have to make the moment-to-moment decisions themselves, but instead manage a team of AI workers — just like commercial airplane pilots who engage autopilots to land in poor weather conditions. Doctors will no longer “drive” the primary diagnosis; instead, they’ll ensure that the diagnosis is relevant and meaningful for a patient, and oversee when and how to offer more clarification and more narrative explanations. The doctor’s office of the future will very likely include computer assistants, on both the doctor’s side and the patient’s side, as well as data inputs that come from far beyond the office walls.
When this happens, it will become clear that the so-called black box of AI is more a feature than a bug, because it’s more possible to capture and explain what’s going on there than it is in the human mind. None of this dismisses or ignores the need for AI oversight. It’s just that instead of worrying about the black box, we should focus on the opportunity, and therefore better prepare for a future, in which AI not only augments human intelligence and intuition but perhaps even sheds light on and redefines what it means to be human in the first place.
This op-ed originally appeared in The New York Times.