Addressing the future with a better ConOps (or is it better humans we need?)

My daughter Stephanie graduated from MIT this year. (See “The girl engineer in my life” from 8 years ago.) The Commencement speaker was Sheryl Sandberg, Facebook COO and best-selling author. Her entire address can be found on the MIT News Office website. Even though she never used the words systems or engineering in her speech, she spoke eloquently about my chosen profession of Systems Engineering. Her focus was on Technology and its effects on the real world, which happens to be a preoccupation of Systems Engineers as well. Instead of Systems she spoke of “Technology,” and instead of Engineering she spoke of “building technology.”

 
[Image: Sandberg gives 2018 MIT Commencement address]

Her essential point amounted to a call for a more accurate Concept of Operations (ConOps). As she explained how technology can be misused and misappropriated by its end users, she pointed to the obvious fact that “...what we build will be used by people — and people are capable of great beauty and great cruelty.” But maybe it’s not that obvious.
The INCOSE SE Handbook says that “the ConOps document describes the organization’s assumptions or intent in regard to an overall operation or series of operations of the business with using the system to be developed, existing systems, and possible future systems” [from section 4.1.2.2], and that one of the stakeholders we should consider is those “who oppose the system” [from section 4.2.2.2]. But what about those who will misuse the system?

Sandberg didn’t shy away from Facebook’s recent troubles, explaining, “we didn’t see all the risks coming. And we didn’t do enough to stop them. It’s painful when you miss something — when you make the mistake of believing so much in the good you are seeing that you don’t see the bad.” In other words, could they have written a ConOps that explained how Facebook could be used for nefarious ends? And, with that knowledge, built safeguards into their platform? Futurecasting is not a strong human capability, so it’s unfair to ask Zuckerberg to be a master prognosticator when we can’t predict the future ourselves. But as I expounded in my blog post, “Predicting or Creating the Future?,” we have to at least try to look at the ramifications a technology will have on society.

Sandberg put it best: “We are not indifferent creators — we have a duty of care. And even when with the best of intentions you go astray — as many of us have — you have the responsibility to course correct. We are accountable — to the people who use what we build, to our colleagues, to ourselves, and to our values.” So this is more than an exercise in writing a good ConOps. We must feel the responsibility, the duty, to do right by our creations and the people who use them. And as Sandberg warns, “the fight to ensure that tech is used for good is never over.”

As Systems Engineers, we have a tendency to believe that technology is the answer: build something better in order to make the world better. The only problem with that hypothesis is the fact that the world is filled with humans … unstable, unpredictable, unruly HUMANS. Sandberg insists that “we need technology to solve our greatest challenges.” Do we? Or do we need better understanding, tolerance, compassion, and empathy? Better technology or better humans? Or can we have both? Better technologies enabling better humans.
Sandberg ends with the admonition, “Seek advice from people with different perspectives, look deeply at the risks as well as the benefits of new technology — and if those risks can be managed, keep going even in the face of uncertainty.” An uncertain future is the one thing we can be certain of, but by working together with understanding, tolerance, compassion, and empathy, we can make that uncertain future a better one.