Editorials, Ethics

How Do Systems Learn, When the Questions Aren’t Known?

I don’t know if you’ve noticed, but we’re quickly getting to the point of having information we can use to help address some of the issues going on in the world. Almost predictively. You can tell because so many times we’re able to look back on an incident, see the bits of information we knew, draw conclusions about whatever is happening (or happened), and realize it could have been known ahead of time.

I feel like we’re in that “last mile” situation where the last step is a doozy. At the same time, we’re clearly headed in that direction: using information from all sorts of sources to better understand things. Trends. Business happenings. Social happenings.

There are some caveats here, though, and the answers aren’t as easy as they may seem.

The first is this: how do we use information from all sorts of sources and trust levels, apply morals and ethics, respect privacy requirements, and avoid abusing the information… all while still getting answers to questions we haven’t asked yet? Yep, that’s a doozy.

We don’t know what to ask in advance. That’s where machine learning will come in – but how do we integrate those other pieces into what the systems can learn and what they can do with it all? I’m frankly not sure we can. I think we’ll quite possibly reach a tipping point where the value of the conclusions outweighs the concerns about those other pesky issues.

How many public safety issues (shootings, terrorism, disasters) will we endure before we all turn and just say “damn the torpedoes…”, let machine learning loose to discover what it may, and use that information to start being more proactive about these things?

Which brings me to caveat two.

If you haven’t seen it, you’ll want to watch Minority Report. As soon as we step into predictive modeling and taking action on that information, we’re almost instantly in the realm of preventative information analysis and application. In the movie, people are prosecuted and removed from society for crimes they WILL commit but haven’t yet. The authorities know this from behavioral signals and analyzed information, and charges are filed accordingly.

Forget the pure criminal analysis – think about daily stuff. We’re so very close to this now. There is a service that analyzes the lines at an amusement park based on sunset times, holidays, school breaks, travel patterns, reservations, and such. It takes that information and suggests the right times to get in line for a given ride, saving you time. Pretty cool. Clearly it’s advanced for right this minute, but it’ll be basic and commonplace very soon, because we’ll be off applying predictive learning to other things in our lives.
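To make that concrete, here’s a minimal sketch of what such a predictor might look like. Everything here is hypothetical: the features (hour of day, holidays, reservations, sunset), the toy training numbers, and the model choice (a random forest via scikit-learn) are my assumptions for illustration, not how any real park service works.

```python
# Hypothetical sketch: predict ride wait times from a few contextual
# features, then pick the hour with the shortest predicted line.
# All features and training values are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assumed features per row: [hour_of_day, is_holiday, is_school_break,
#                            park_reservations, minutes_until_sunset]
X_train = np.array([
    [10, 0, 0, 12000, 540],
    [14, 1, 1, 30000, 300],
    [18, 0, 1, 22000, 60],
    [11, 1, 0, 25000, 480],
])
y_train = np.array([15, 95, 40, 70])  # observed wait, in minutes

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score each candidate hour for a school-break day and suggest the
# time with the shortest predicted wait (sunset assumed at 19:00).
candidates = [(hour, model.predict([[hour, 0, 1, 20000, (19 - hour) * 60]])[0])
              for hour in range(9, 20)]
best_hour, best_wait = min(candidates, key=lambda c: c[1])
print(f"Shortest predicted wait: ~{best_wait:.0f} min around {best_hour}:00")
```

The point isn’t the model; it’s that once the features and history are flowing in, the “when should I get in line?” answer falls out almost for free, and the same pattern generalizes to anything else we feed it.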

How do we know the questions to ask – what information do we need to make decisions? What if it messes up? (It will.) What if we miss something? (We will.)

This last mile has so much potential, good and bad. It won’t be long, though, before we throw caution to the wind and just let the systems ingest everything they can get their hands on and learn. The benefits will outweigh the risks (at least that’s what the rationale will be), and we’ll have foibles along the way. But the first time we apply this to prevent some nasty incident or cure some disease with unknown related factors, it’ll be all the justification we need to let it expand.

Is it possible to develop machines with morals and ethics?