• 0 Posts
  • 11 Comments
Joined 4 months ago
Cake day: February 13th, 2025

  • Yes. Exactly. They’ll also deploy upgrades to themselves painlessly. Thankfully that’s never been a huge ongoing pain felt by everyone paying attention.

    (I couldn’t resist adding a “yes, and” to your point.)

    Edit: And the AI agents will back themselves up correctly, too! We trained them on the activities of all currently living IT engineers, and the average of our work always results in a successful backup…

    If that wasn’t true, we would be having a new ransomware crisis every month…

    I’m sure glad we live in one of the good timelines, and have plenty of clean correct code and configuration data to train our AI on!

    (This is, of course, sarcasm. Companies that shift to AI IT agents today can expect to very quickly reach today’s median IT outcome. There’s not enough popcorn in the world for what is coming.)


  • Stupid question, but what is stopping software engineers from poisoning the well?

    Great question. I agree with other responses - it happens, and there’s motive to hush it up, so we tend not to hear about it.

    It’s also just really hard to tell the difference after the fact between “Dave sabotaged us” and “no one knows how to do what Dave did”.

    But I’ll add - there’s currently little motive to sabotage AI implementations. Current generation AI is largely unable to deliver on what is promised, in a business sense. It does cool but useless things, like quickly generating low-maturity code, and writing a summary any seven year old could have written.

    Current generation AI adds very little business value, while creating substantial risks. Nevermind that no one knows how Dave worked, now no one knows how our AI works, and it’s so eager to please everyone that it lies at critical moments.

    Companies playing around with current generation AI to boost next quarter’s stocks will hit plenty of “find out” soon enough, with nothing beyond the natural consequences of ignoring their own engineers’ advice.

    All that to say - if we see what looks like sabotage, it may well just be the natural consequences of stupidity.


  • With the federal government gutting funding of its own agencies, we may see more of this.

    Federal laws are only as effective as their enforcement. If states lose confidence in federal enforcement, it makes sense that they will try to do their own thing, and see whether the federal courts are understaffed and lethargic or able to act.

    And if the federal government succeeds in using AI instead of human staff, then all each state will need to do is pass the same law a few different times with slightly different wording to hit the right gap in the AI.

    There’s interesting times ahead.