Editorials

Do You Consider Robots (Automation) a Threat?

I've increasingly been having conversations (including one on Facebook in the last couple of days) about automation and robots and what they mean for jobs and duties… so I'm curious: what do you think of it all?

To me, it seems pretty likely (unavoidable?) that things that can be decided based on data will be some of the prime candidates for automation – and learning from masses of information (here's the call to big data) also lends itself to this.

Just in the last several days, there have been articles suggesting that robot doctors will be able to use Google's AI to diagnose cancer more quickly than human doctors. And at the same time, we're hearing more and more about self-driving cars, about automation of systems…

To bring this all home to our work specifically: today, in many of our projects, there is already substantial automation. If a server goes out, it replaces itself, complete with code updates and testing, then sends an email to us when it's done.

If systems need additional capacity, same thing. The automation steps in and scales out the servers and operations, shoots a quick email as it scales up, and scales back down when the heavier load eases.
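
To make that concrete, here is a minimal sketch of what such a scale-out/scale-in loop might look like. Everything here is hypothetical – the thresholds, helper functions, and addresses are placeholders, and a real deployment would lean on the cloud provider's own autoscale rules rather than hand-rolled code – but it shows the shape of the logic: watch a metric, adjust capacity, tell a human what happened.

```python
import smtplib
import time
from email.message import EmailMessage

# Hypothetical thresholds and addresses, for illustration only.
SCALE_OUT_CPU = 80      # scale out when average CPU exceeds this percentage
SCALE_IN_CPU = 30       # scale back in when average CPU drops below this
CHECK_INTERVAL = 60     # seconds between checks
ADMIN_EMAIL = "ops@example.com"

def get_average_cpu() -> float:
    """Placeholder: a real system would query its monitoring platform here."""
    return 42.0

def set_instance_count(count: int) -> None:
    """Placeholder: a real system would call the cloud provider's scaling API here."""
    print(f"Scaling to {count} instances")

def notify(subject: str, body: str) -> None:
    """Send a short status email so a human knows what the automation did."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "automation@example.com"
    msg["To"] = ADMIN_EMAIL
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def run(min_instances: int = 2, max_instances: int = 10) -> None:
    instances = min_instances
    while True:
        cpu = get_average_cpu()
        if cpu > SCALE_OUT_CPU and instances < max_instances:
            instances += 1
            set_instance_count(instances)
            notify("Scaled out", f"CPU at {cpu:.0f}%, now running {instances} instances.")
        elif cpu < SCALE_IN_CPU and instances > min_instances:
            instances -= 1
            set_instance_count(instances)
            notify("Scaled in", f"CPU at {cpu:.0f}%, back to {instances} instances.")
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    run()
```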

These are pretty basic examples, but they show that tasks driven by factual, observable events are already being automated. Azure self-tunes databases, operations, and performance. Many cloud providers will optimize servers automatically and will even shut down runaway processes. I've written here before, too, about Azure and its ability to "sniff out" transactions and activity that seem suspect, including making suggestions to you on what needs attention.
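
Conceptually, the simplest version of that kind of "sniffing" is just statistics over recent activity: flag anything that sits far outside the recent baseline. This little sketch is not how Azure actually does it (its detection is far more sophisticated), just an illustration of the idea:

```python
from statistics import mean, stdev

def flag_suspicious(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag the latest measurement if it falls more than `threshold` standard
    deviations away from the recent baseline. Purely illustrative."""
    if len(history) < 2:
        return False
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > threshold

# Example: transactions per minute over the last hour, then a sudden spike.
recent = [120, 118, 125, 122, 119, 121]
print(flag_suspicious(recent, 480))  # True – worth a closer look
```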

One of the things that always bugged me when writing code (I was a Pascal guy) was "missing ;" – well, if you know where it goes (you just gave me the position and line number, Mr. Error Message), WHY NOT JUST ADD THE DUMB THING?!? Yep.

Clearly, automation is coming forward on administration, troubleshooting, configuration, performance, and so on. I welcome it. I think it will recognize issues faster and make decisions faster.

Of course, all the danger areas are still there. Terminator movies, anyone? Sure, those are a ways off (I hope), but you can pretty quickly see the decision point for automation where it's picking what stays, what goes… WHO stays, WHO goes.

Do you consider this automation a threat, or a leverage point for your work with systems? Do you think knowing how to work with automation solutions and the technologies that are screaming ahead will enhance an administrator's employability?

I, for one, welcome it. At the same time, I think the impact of full-on automation will be stunning, to say the least. I believe it's a planning-horizon thing, and something that warrants attention, thought, and management now, not later. Not in a conspiracy type of way, but in a "not sticking our heads in the sand and pretending it's not arriving sooner than we think" kind of way.

  • Eilenblogger

    As with all things, it can be used for good or bad purposes.

    You can use an automobile to get from one location to another or you can use it to mow people down in a crowd.

    I recently saw a robot on the news that moved eerily like the ones from Hollywood.

    It doesn’t take a genius to extrapolate and imagine the Russians hacking the robots and realizing a Hollywood script.

    So yes, embrace it and build safeguards for worst-case scenarios.

    I think AI will be huge for the medical industry. Imagine having the collective knowledge of the medical profession available to professionals to diagnose and treat disease. That is infinitely better than relying on the inherent flaws of a few individuals who may or may not have had a full night's sleep or a fight with their spouse prior to your appointment.

    This is all part of our natural evolution as an intelligent species.

    It's more likely that we will drive ourselves extinct through the survival instinct of greed long before the robots take over the world anyway.

    And it’s not like we have any control over it all.

    Men Will Be Men, or more aptly, Man Will Be Man.

    • So true (good and evil). That control is going to be important, but near impossible over time, I suspect.

  • Virgil Rucsandescu

    Throughout history, the human race has used every single invention to improve weaponry… Now imagine AI being added to the wonderful weapons we already have.
    I am not afraid of AGI (Artificial General Intelligence). Even if we can probably build it, I am sure research on it will be stopped at some point by the elites (it's easy – there are only 5 or 6 places on Earth where it could happen)… And even if they don't stop it, I have a gut feeling it's going to be the best thing in human history…
    I am just afraid of the humans who will control the specialized AIs…

    • I agree, overall, that it’s a good thing.

      I think the biggest challenge will be that control. And control of the AI and its ability to self-manage will be very, very challenging. I’ve been reading up on the point where “they” learn faster than we do. What happens then?

      The whole thing of “protect humans” — but humans are likely the biggest threat to their own existence…

      All very ethereal, I suppose, for a data platform type of site, but still…

      Hey! It all has to be stored and managed and accessed in database technologies. There, brought it home. 🙂

      • wombat_7777

        See my comment above… whether it's a firearm or being able to pull the plug some other way, a kill switch is essential. You can see how the "Terminator" Skynet scenario could unfold…

    • wombat_7777

      I was watching "Westworld" recently, and one thing that jumped out at me was how dangerous AI could get. Ironically, I said to my wife that with AI running things, a loaded sidearm is essential for being around humanoid robots, no ifs, buts, or maybes… especially ones strong enough to kill you. They are machines and should be treated as such. I have no issue pulling the trigger on a machine, with no hesitation.

      Yes, AI will replace some human tasks, but it's dumb humans that try to push it beyond its limits… ironically, if AI messes up, it will require humans to mop it all up.