Gordon Briggs and Matthias Scheutz, from Tufts University’s Human-Robot Interaction Lab, are trying to develop mechanisms that let robots reject orders they receive from humans, as long as the robots have a good enough excuse for doing so.
In linguistic theory, there’s an idea that when someone asks you to do something, whether or not you really understand what they want — in a context larger than the words themselves — depends on what are called “felicity conditions.” Felicity conditions reflect your understanding of, and your capability of actually doing, that thing, as opposed to just knowing what the words mean. For robots, the felicity conditions necessary for carrying out a task might look like this:
- Knowledge: Do I know how to do X?
- Capacity: Am I physically able to do X now? Am I normally physically able to do X?
- Goal priority and timing: Am I able to do X right now?
- Social role and obligation: Am I obligated based on my social role to do X?
- Normative permissibility: Does it violate any normative principle to do X?
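The five conditions above amount to a gatekeeper that a robot runs before accepting a command. Here’s a minimal sketch of that idea in Python; every class, attribute, and message here is an illustrative assumption, not the actual Tufts HRI Lab implementation:

```python
class Robot:
    """Toy robot that vets a command against the five felicity conditions."""

    def __init__(self, skills, able, busy, obligations, forbidden):
        self.skills = set(skills)            # Knowledge: tasks it knows how to do
        self.able = able                     # Capacity: physically able right now?
        self.busy = busy                     # Goal priority: occupied with something else?
        self.obligations = set(obligations)  # Social role: tasks it must accept
        self.forbidden = set(forbidden)      # Normative: tasks that violate a principle

    def felicity_check(self, task):
        """Walk the five conditions in order; return (ok, reason)."""
        if task not in self.skills:
            return False, "Knowledge: I don't know how to do that."
        if not self.able:
            return False, "Capacity: I'm not physically able to do that."
        if self.busy:
            return False, "Goal priority and timing: I can't do that right now."
        if task not in self.obligations:
            return False, "Social role and obligation: I'm not obligated to do that."
        if task in self.forbidden:
            return False, "Normative permissibility: that would violate a principle."
        return True, "OK."


robot = Robot(skills={"walk forward", "walk off the table"},
              able=True, busy=False,
              obligations={"walk forward", "walk off the table"},
              forbidden={"walk off the table"})
print(robot.felicity_check("walk forward"))        # (True, 'OK.')
print(robot.felicity_check("walk off the table"))  # rejected on condition 5
```

The ordering matters: the robot only reaches the normative check once it has established that it knows how, is able, is free, and is obligated — which is why a refusal can come with a specific, human-readable excuse.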
via IEEE Spectrum
November 20, 2015
Case 5 is going to fail in all cases, so that’s nice. “I am standing on the table, stylish_dismounts.lib failed to load, it can’t get much worse…what is the tipping use-case in this room?”
Yeah, I can see they ran through the bar crane safety drills, then went all through the No Mo’ Dykes book looking for the right use-cases.