The Regret Intent

Systems that offer grace or forgiveness will have to communicate clearly the kinds of correction they afford, and the depth of those corrections.

How does a system communicate (through its design, its affect, its documentation, etc.) where and how these kinds of forgiveness are available?

This post is just a note-to-myself about a possible direction you could go in answering this question. The direction proposed is basically a structured interaction in which the person volunteers “I regret yesterday,” and the system goes from there: “Here are the things I can alter from yesterday. Shall I delete them? Rewrite them as spoofed, characteristic activity instead of specifics? ‘Blur’ the records and make them less accurate?”
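A minimal sketch of what that exchange could look like, in Python. Everything here is invented for illustration: the CorrectionKind names, the Record shape, and the idea that the system's memory is a simple date-keyed store.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto


class CorrectionKind(Enum):
    DELETE = auto()  # remove the records outright
    SPOOF = auto()   # replace specifics with plausible, characteristic activity
    BLUR = auto()    # keep the records but reduce their precision


@dataclass
class Record:
    source: str  # e.g. "router.log", "thermostat.history" (hypothetical sources)
    detail: str


def handle_regret(day: date, store: dict[date, list[Record]]) -> None:
    """Respond to 'I regret <day>' by listing what can be altered,
    then asking the person to choose the depth of the correction."""
    records = store.get(day, [])
    if not records:
        print(f"I hold nothing from {day} that I can alter.")
        return
    print(f"Here are the things I can alter from {day}:")
    for r in records:
        print(f"  - {r.source}: {r.detail}")
    choices = ", ".join(k.name.lower() for k in CorrectionKind)
    print(f"Shall I {choices} them? Nothing happens until you choose.")
```

The point of the sketch is the last line: the system enumerates its options and then waits, rather than deciding on a correction itself.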

The goal here is to work towards systems we can trust to accommodate us in this open, accepting, and confidential way. I don't engage here with the kinds of regret, or with what situations might prompt a regret utterance, but I am sure you can imagine some of your own.

As a simple example, let's look at the Undo function as a rough shorthand for a personal intent expressed as “I regret my last action in this software context.” Here, the depth-of-correction is the number of separate Undo actions the system can perform. The breadth-of-correction is the set of verbs that are undoable.
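In code, those two dimensions can be as small as a bounded history and a whitelist of verbs. This is a sketch under my own assumptions, not any particular toolkit's undo machinery:

```python
from collections import deque

UNDOABLE_VERBS = {"insert", "delete", "format"}  # breadth: which verbs can be taken back
UNDO_DEPTH = 50                                  # depth: how many steps back we can go

# Older entries fall off the left end and silently become permanent.
history: deque = deque(maxlen=UNDO_DEPTH)


def record(verb: str, inverse) -> None:
    """Log an action; only whitelisted verbs earn a place in the undo history."""
    if verb in UNDOABLE_VERBS:
        history.append((verb, inverse))


def undo() -> None:
    """'I regret my last action': pop the most recent undoable step and invert it."""
    if history:
        verb, inverse = history.pop()
        inverse()
    else:
        print("Nothing left to undo; earlier actions are permanent.")
```

Notice how the two constants answer exactly the communication question above: a system could surface UNDO_DEPTH and UNDOABLE_VERBS directly, so the person knows in advance what is negotiable.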

For a more complex example, think of “I regret hosting Donald yesterday.” Do we erase his devices' MAC addresses from our router's logs? Do we blur or clobber the DropCam/Nest/IP-camera records?
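A sketch of what “erase” versus “blur” might mean against such records. The MAC address, the idea that log lines are plain strings containing it, and the choice of the hour as the blur granularity are all assumptions of mine for illustration:

```python
from datetime import datetime

GUEST_MAC = "a4:5e:60:00:00:01"  # hypothetical: the regretted guest's device


def scrub_router_log(lines: list[str]) -> list[str]:
    """Erase: drop every log line that mentions the guest's MAC address."""
    return [ln for ln in lines if GUEST_MAC not in ln.lower()]


def blur_timestamp(ts: datetime) -> datetime:
    """Blur: round a record's timestamp down to the hour, so the record
    survives but no longer pins down exactly when the visit happened."""
    return ts.replace(minute=0, second=0, microsecond=0)
```

Even this toy version shows the trade-off the system would have to explain: erasure leaves a suspicious gap, while blurring leaves a truthful-but-vaguer trace.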

I frame this as “regret → forgiveness” specifically because I don't want a system to take these actions on its own, or to draw its own conclusions about the intended depth of a correction. I want people who use the system to be able to trust that they understand which actions are permanent, and which are negotiable.