Rise of the Slaughterbots

Comments

      • I’m not sure, but there was a rumour that Google took over Boston Dynamics in order to scale back its military contracts and redirect the technology to more peaceful purposes.

    • Paul Verhoeven films are the best; the gore when the robot kills him is over the top, but it conveys the point clearly.

      Starship Troopers is another favourite of mine. In particular I love the commercials of the future in both films.

  1. The Traveling Wilbur

    Gives a whole new meaning to the phrase “My robot army is bigger than your robot army.”

  2. ErmingtonPlumbingMEMBER

    I’ve got a mate who is an aircraft engineer (trained by the Army). He has only recently acquired a mobile phone and refuses to have any kind of online profile at all… especially not Facebook!
    I always thought him a little paranoid… but after that clip – holy $hit – I think he’s just being prudent.

    Maybe I might pull back a little on my criticisms of Global Plutocracy and calls for greater Democracy from now on.

  3. Miniaturised drones, programmed, autonomous in the sense that no one pushes a button for the final kill; apart from that, wholly an act of man.

    Clean kills. No indiscriminate, unscripted fatalities. Cost-effective. What would Obama have done?

    “As Donald Trump assumes office today, he inherits a targeted killing program that has been the cornerstone of U.S. counterterrorism strategy over the past eight years. On January 23, 2009, just three days into his presidency, President Obama authorized his first kinetic military action: two drone strikes, three hours apart, in Waziristan, Pakistan, that killed as many as twenty civilians. Two terms and 540 strikes later, Obama leaves the White House after having vastly expanded and normalized the use of armed drones for counterterrorism and close air support operations in non-battlefield settings—namely Yemen, Pakistan, and Somalia.”

    “The 542 drone strikes that Obama authorized killed an estimated 3,797 people, including 324 civilians.”

    • mild colonialMEMBER

      I’ve always suspected that the climate around me cooled with my (other) earnest colleagues when I said, some years ago, that I didn’t like Obama. And I didn’t even particularly follow his doings. As Helen Razer said this week, before we call Trump the biggest racist, think about Jefferson enslaving his own swarthy children and Obama bombing brown kids. As Germaine Greer said last week, at least Trump isn’t doing exactly as he’s told (with the caveat that that may still end up being regrettable). (Upset you missed Germaine? 🤣🤣 Listen to her on the RN morning website, interviewed by Hamish.)

    • The other side of the coin is that you don’t have to program in who to kill, only what traits you’d like them to target.

      Genocide, anyone?

    • GunnamattaMEMBER

      All militaries in the developed world – and the usual non-developed-world suspects, Russia and China for starters – are well on the road to this type of capability.

      Drone miniaturization is already within reach of this sort of thing, and the ability to fit anti-individual warheads (from explosive devices to chemical or biological agents) is either there or close enough to see, and has lots of research dollars being poured into it.

      The ability to deploy and control drones genuinely remotely via satellite is still a little ropey but is obviously on the way – which in a military (if not public-control) sense means that the first defence is highly likely to be some form of controlled electromagnetic pulse.

      The real issue is still individual recognition – it is fairly straightforward in a military sense when you can pick all the people in a particular location, or all those with particular distinctive features (an item of clothing, some form of rank insignia, etc.), but where a drone needs either to relay information to a controller or be pre-programmed to identify a set range of features, it becomes a load more complex.

      If the underlying assumption is that there will be photos of all of us sufficient to personally identify all of us in our social situation then we may still be some way off. My initial thought is that this possibility adds to the desirability of living away from large urban centres (where essentially every last individual is photographed fairly regularly).

      I think it is also worth considering that the facial/physical recognition required is not far from being the same thing as lie detection (a map of a face across different responses and situations can betray whether the individual is telling porkies or not). If that is the case, then the same technology becomes a giant ‘bullshit’ monitor, which should be available to report on every last person in the media or politics (for starters). It is the same as the technology – still being developed for anti-terrorist purposes, but very, very close – in which people boarding planes are asked a series of questions and their risk is assessed from facial measurements, body temperature and the like.

      The other thing that I tend to foresee when I see stuff like this (also fairly notable in the recent film Blade Runner 2049) is that there will likely be low-tech countermeasures in particular locations, or some sort of impetus to ‘first strike’ from the completely unknown (both individuals and locations).

      One assumes the Chinese will be quite capable of making zillions of whatever it is that is required and that Australia (having outsourced manufacturing and design types of stuff) will require lots of iron ore sales to buy some.

      God only knows what the future holds but there will be some of this in it – it may be that it is a defence/control measure for the plutocracy, but it may also be a vehicle for motivating far superior social outcomes and considerable monitoring of the plutocracy as well. My thoughts in the wee hours…

      • “If the underlying assumption is that there will be photos of all of us sufficient to personally identify all of us in our social situation then we may still be some way off.”
        That ship has long sailed. The reason you are no longer allowed to smile in passport/driver’s licence photos is that smiling makes it harder for the facial recognition software. Facebook already has the software and the database to achieve this. I don’t have a single photo attached to my Facebook account and I still get auto-tagged by their facial recognition software in posted photos.
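        For context on how that auto-tagging works: systems like this typically reduce each face to a numeric embedding and match by distance against enrolled faces. Below is a minimal sketch of just the matching step – the names, the tiny 4-dimensional embeddings, and the threshold are all made up for illustration; a real encoder (FaceNet, dlib, etc.) emits 128 or more dimensions.

```python
import math

def match_face(query, database, threshold=0.6):
    """Return the name of the closest enrolled embedding, or None if
    nothing is within `threshold` (Euclidean distance)."""
    best_name, best_dist = None, threshold
    for name, emb in database.items():
        dist = math.sqrt(sum((q - e) ** 2 for q, e in zip(query, emb)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Toy 4-d embeddings standing in for a real face encoder's output.
db = {"alice": [0.1, 0.9, 0.2, 0.4],
      "bob":   [0.8, 0.1, 0.7, 0.3]}
print(match_face([0.12, 0.88, 0.22, 0.41], db))  # closest match: alice
```

        The point of the threshold is the privacy-relevant part: below it, the system asserts an identity; tune it too loosely and strangers get tagged as you.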

      • mild colonialMEMBER

        How to fight back? Oil on the ground? Ball bearings? Plastic face hoods? Dental cotton wool rolls in the cheeks? Thick make-up? Wigs, hats and glasses? A burqa? A device that puts up a magnetic shield around us. Magnets that can be activated around our houses. Glue patches. Fishing-line fences. Water pistols. Transmission-zapper apps in our phones. Moving to Hay or Broken Hill. My thoughts on how to fight drone assassins in the early hours of the morning.

      • I did a couple of Masters in the UK at a very well-known “school” over 8 years ago, dealing specifically with artificial intelligence in the military and robotic warfare – even at the time, the technology was far in advance of what you are talking about.

        Even in my own software development I was using facial recognition well over 8 years ago, and the available data was massive. Not many people remember this, but the Facebook API was exposed for facial recognition (every photo you post has the person’s name tagged automatically) – the data is 100% there.

        The capability of drones to identify anyone within a western society would be 95% at LEAST – even Google’s Photos software, which is quite extraordinary (sorry Apple users, but Apple truly is crap here), has for many years been able to identify adults from their baby photos – it’s quite amazing.

        Here is the future in China ALREADY!

        http://www.wsj.com/video/next-level-surveillance-china-embraces-facial-recognition/9ED95BFA-76EF-48DA-A56B-50126AFDDA1C.html

        Peter Singer, Wired for War.

      • I could build what you need using Azure Cognitive Services in about an hour.
        I could even make it monitor social media and automatically take out anyone who said something negative about me…
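        For what it’s worth, the “monitor social media” half of that boast really is the easy part. A deliberately crude sketch of the monitoring step – a keyword list standing in for a real sentiment model, with invented post data and no actual Azure calls:

```python
# Stand-in for a trained sentiment classifier: a crude keyword list.
NEGATIVE_WORDS = {"crap", "terrible", "hate", "fraud"}

def flag_authors(posts, about):
    """Return authors whose posts mention `about` and contain a
    negative keyword. `posts` is a list of (author, text) pairs."""
    flagged = set()
    for author, text in posts:
        words = set(text.lower().split())
        if about.lower() in words and words & NEGATIVE_WORDS:
            flagged.add(author)
    return sorted(flagged)

# Hypothetical posts for illustration only.
posts = [("u1", "Zentao is a fraud"),
         ("u2", "Zentao seems nice"),
         ("u3", "I hate mondays")]
print(flag_authors(posts, "Zentao"))  # ['u1']
```

        That a few lines of filtering gets you a target list is exactly why the “automatically take out” half is the part worth worrying about.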

      • The reason you are no longer allowed to smile in passport/drivers licence photos any more is because smiling makes it harder for the facial recognition software.

        No longer? Passport photos have required a “neutral expression” for at least the 25 years I’ve had one.

      • GunnamattaMEMBER

        @Zentao

        That’s great. Could you build me an app to read every last face in any form of media, and listen to every last voice in the media, and tell me when they are bullshitting? Then I want it to produce the word ‘BULLSHIT!!!!’ on every last TV, computer or mobile screen, or loudly announce ‘This person is speaking bullshit!’ from each local device when this occurs.

        I have dreamed of the perfect bullshit machine for so long!…

      • @Fred

        “Apple truly is crap” – I guess you are referring to the facial recognition startup iPhone thingo that they have been pushing recently. Well, big deal. Apple is the goods for more mundane things, such as ease of use for those of us who happen to be non-nerdy oldies.

      • “and tell me when they are bullshitting”
        Facial recognition and lie detection are completely different things; your underlying assumption is false.

        “Passport photos have required a “neutral expression” for at least the 25 years”
        Good indication of how long facial recognition software has been available to governments.

      • GunnamattaMEMBER

        @bjw678

        OK, it may not be perfected yet, but these will give you an idea of how the two certainly are linked.

        The premature quest for AI-powered facial recognition to simplify screening

        The app that knows if you’re lying: Online ‘polygraph’ uses artificial intelligence to study your face for subtle signs you’re being deceitful

        This one is a bit older, from MIT, but gives you an idea of where they are heading…

        Lie Detection: To a few human experts, our faces are open books. Now computer technology automates those abilities.

        I also took part in a test of some Russian software along the same lines back in 2009; the developers wanted some non-Russian people to run their tests on (they thought there were possibly different cultural habits and facial expressions betraying people telling porkies). My experience led me to the conclusion that the technology worked with a high degree of accuracy (though not perfectly). I understand that security vetting in the ‘five eyes’ world remains profoundly interested too.

  4. For the first 3 minutes, I thought it was a real product! Then I saw the fake news channel (SDN).

    3 grams of explosive? Is it RDX or something?

    I am living in the Most Lootable City and drones would indeed come in handy for catching home invaders. Have CCTV cameras atop street lights, along with microphones and smart software to detect the sound of windows being smashed. As soon as a window is smashed (especially at night), the drones should automatically surround the window and record video of whoever comes out of the window/house. Police should watch the footage live from their car, and the drones should play a recorded voice telling the home invaders, “Do not move.” And if the bandits move, get the drones to taser them or capsicum-spray them, but not kill them. Then the cops can come and arrest the bastards.
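    The dispatch step in a scheme like that is simple enough to sketch. Here is a toy version of “nearest available drone responds to the smash event” – the coordinates, drone IDs and availability flags are all invented for illustration:

```python
import math

def dispatch(event_pos, drones):
    """Pick the nearest available drone to a detected break-in.
    `drones` maps drone id -> (x, y, available)."""
    best_id, best_dist = None, float("inf")
    for drone_id, (x, y, available) in drones.items():
        if not available:
            continue  # skip drones already on a job
        dist = math.hypot(x - event_pos[0], y - event_pos[1])
        if dist < best_dist:
            best_id, best_dist = drone_id, dist
    return best_id

drones = {"d1": (0.0, 0.0, True),
          "d2": (1.0, 1.0, False),   # already responding elsewhere
          "d3": (5.0, 5.0, True)}
print(dispatch((1.2, 0.9), drones))  # nearest available: d1
```

    The hard parts, of course, are the ones glossed over here: reliably classifying the breaking-glass sound and deciding, legally, who is allowed to authorise the taser.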

  5. reusachtigeMEMBER

    I want some of those robots so that I can program them to hunt down sick crashniks and relieve them of their pain!

  6. Existing laws of armed conflict and targeting processes: Only humans can make legal decisions regarding targets.

    “We need to ban lethal autonomous weapons systems!”

    *makes video where bad humans are still the ones making illegal decisions*

    Don’t get me started on technical feasibility. Or the fact that they ripped off a Black Mirror episode, where all of the purported technologies are developed for other purposes anyway and any old random can assemble them, legal or not.

    • They didn’t rip off Black Mirror, they just covered similar territory to the bees episode. Black Mirror doesn’t own science fiction…

      I’m curious: what existing laws of war specify that only humans may make targeting decisions?

      • You missed the point. It’s not about stealing scifi ideas. It’s about the inevitability of the technology pathway that the proposed ban does nothing to prevent (and may well make worse).

        Amongst other things, deliberate targeting decisions require legal approval to ensure a legally defensible position. Those making the decision must weigh the proportionality of the military objective against the expected collateral damage. No AI system is making these decisions any time soon.

        Ah, the joys of arguing with people on the Internet! I didn’t “miss the point”. I disagreed with something you said. Whether that was your main point or not is immaterial.

        And nothing in your second paragraph answers my question: what existing laws of war specify that only humans can make targeting decisions? Notably, you don’t quote the law of any actual country or treaty, just a paraphrase of what you think the law across several countries generally might be. The legal picture is probably a good deal murkier than that. Take the example of torture: it was regarded by most people as against the laws of war, and then suddenly there were legal opinions suggesting it is not illegal, or that certain torture-like behavior is not actually “torture”, etc.

        To be clear: there are, as far as I’m aware, no laws anywhere that specifically require human beings to make targeting decisions. You BELIEVE that the current phrasing of the law in certain unspecified jurisdictions HAS THE EFFECT of requiring human decisions – but you don’t really know. And somewhere a lawyer is writing a legal memo which challenges this view. Or ignores it. Or whatever. You might try watering down some of that instant internet certainty and maybe try speculating about the future with the rest of us…

      • Laws of Armed Conflict (LOAC) are an actual thing that most countries in the world are signed up to.

        Concepts such as proportionality are necessarily subjective, since there is no perfect metric for weighing the costs and benefits of the consequences of military actions.

        Either the technology is good enough to make such assessments to the satisfaction of the legal systems, in which case what is the point of a ban? Or the technology is not good enough, in which case employing said systems would create consequences under which the operators would easily become war criminals, and thus they would not buy/use them.

        The technology is nowhere near good enough (and nor will it be any time soon) to handle the complexity required to make such decisions, ergo existing legal frameworks already provide adequate protections.

    • I dunno about that. Australian warships have been able to fight autonomously since around the mid-1980s. In other words, the ships can (or at least used to be able to) be set to operate in a mode where engagements are conducted completely automatically, to remove operator reaction time in high-threat environments.

      I believe the Phalanx CIWS also has such a mode built in. It would shoot anything that moved, including the ship’s helicopter; I have heard it likened to defending your house by chaining a rabid bulldog at the front door.

      You don’t want to fly your passenger aircraft full of civilians at a ship operating like that.

      • There are many examples of systems that will behave in a defined way and autonomously execute a mission that a human has authorised it to conduct.

        If I make a decision to throw a grenade into a room, then I am legally accountable for the damage that it causes. If a warship captain chooses to turn on an automated defence system that will use missiles and CIWS (these systems don’t operate in isolation) to automatically engage threat signatures detected within a defined range (no, this won’t include your own assets), then that captain has made a legal decision in relation to targeting.

  7. That clip would make Paul Verhoeven proud. Unless it’s his work. If it is, then the master has still got it.

    Nice to see we’ve just made nuclear weapons redundant – except that they make great EMP devices. So now, to save our collective selves from a foreign attack-drone swarm, we have to use an EMP and wipe out all our own electronics… welcome to the future of buggy whips and the horse and cart.

    Or, the technology will be used by local powers to eviscerate the non-productive elements of society. The end of welfare spending on bogans, oldies and minorities. No more dissidents. Billions fewer people on the planet. The end of global warming is at hand via extreme population control coming to an airspace near you

    Aren’t we humans so damned clever… at perverting anything useful into a Frankenstein.

  9. Bugger bitcoin. I’m starting up a company making Ronald Reagan face masks. Oh, and don’t use the same deodorant twice….

  10. Hoping to get a job programming these types of drones. Could settle decades of old scores in a few minutes. What was that about “the meek shall inherit the earth”?

    • An understandable misspelling in the translation of the Bible. The original said “and The Mech shall inherit the earth”. Of course, that meant nothing to the Romans et al.