Exciting (and instructive) specific examples.
And I agree with your assessment. To me also they feel like "weak functional replacements," or I might say "re-presentations" of misreading or "simulations" of misreading. The whole idea can get pretty scatological/semantic -- how can it be a misreading if I intentionally program its possibility? Isn't that more properly understood as an alternate reading rather than a misreading?
It returns us to Jon Ippolito's focus on "behavior" (rather than "process") and to Turing's focus on human-like behavior (rather than "human-ness"). There is a part in Wittgenstein's Philosophical Investigations where he more or less arrives at the conclusion that behaving as if you understand and "understanding" might just as well be the same thing, because how could one person ever "objectively" assess whether another person "actually/essentially" understood, other than by assessing their resultant behavior?
Pragmatically, misreading matters not based on how "pure" or "authentic" or "human" (whatever those things are) it is. Misreading matters when it leads to something happening in the world that matters. To bring this back to the topic of the month -- misreading matters when its results are performed (in ways that matter). So a hermetically sealed digital compiler aleatorically interpreting intentionally vague programming code and compiling it one way one time and another way the next based on a system of interpretive rules (however quasi-emergent/generative) -- that's cute. But only to the degree that it begins to connect to larger contexts in the larger world does it begin to matter.
Which is one reason I enjoy making art that modulates back and forth between human bodies and 'puter systems in lived space time. The chances of misreading exponentially increase. But then I also enjoy conversations at coffee houses and bars for the same reason. The more beer, the more qualitative modulation of affective linguistic slippage, until you finally slip into ye olde (boring) binary disconnect.
On Mar 21, 2014, at 8:04 PM, Rob Myers wrote:
> On 21/03/14 02:03 PM, Curt Cloninger wrote:
>> Something that stuck with me in the dialogue so far which seems
>> important (and I don't recall who introduced it), is the idea of a
>> compiler or an interpreter that simply refuses to compile syntactically
>> malformed code. This really foregrounds the implicit difference between
>> "code" (as in computer programming languages) and "language" (as in
>> "natural" human languages uttered/written in the world). Theoretically,
>> programming code can have all the robust, affective wiggle room of human
>> languages -- in other words, it can have the ability to be "misread."
> The meaning of Perl code varies by context:
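[A minimal sketch of the idea -- in Python rather than Perl, and only loosely analogous to Perl's scalar vs. list contexts -- where one value yields different "readings" depending on how it is used:]

```python
class Contextual:
    """One value, several readings: loosely analogous to Perl context,
    where the same expression gives a count in scalar context and the
    elements themselves in list context."""

    def __init__(self, items):
        self.items = list(items)

    def __len__(self):       # "scalar context": a count
        return len(self.items)

    def __iter__(self):      # "list context": the elements
        return iter(self.items)

    def __str__(self):       # "string context": joined text
        return " ".join(self.items)


words = Contextual(["mis", "read", "ing"])
print(len(words))     # 3
print(list(words))    # ['mis', 'read', 'ing']
print(str(words))     # mis read ing
```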
> Multimethod dispatch algorithms deal with resolving ambiguity and intent
> (if Yaxu is reading this I'm sure he can relate this to strong static
> type hierarchies in functional languages):
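[A toy version of multimethod dispatch in Python -- a hypothetical helper, not a real library or a production resolution algorithm -- where the dispatcher decides which implementation the argument types "mean":]

```python
_registry = {}

def multimethod(*types):
    """Register an implementation for a tuple of argument types;
    calls are resolved at runtime by matching the actual types."""
    def register(fn):
        _registry[(fn.__name__, types)] = fn
        def dispatch(*args):
            key = (fn.__name__, tuple(type(a) for a in args))
            # a missing key means no registered "reading" applies
            return _registry[key](*args)
        return dispatch
    return register

@multimethod(int, int)
def combine(a, b):
    return a + b

@multimethod(str, str)
def combine(a, b):
    return a + " " + b

print(combine(1, 2))           # 3
print(combine("mis", "read"))  # mis read
```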
> And if anyone remembers Prolog, that resolves logical constraints and in
> the right circumstances can give (many) more than one answer to a question:
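[In Python, generators give the same flavor: a query that can succeed more than once, yielding every answer instead of committing to one. A sketch of the idea, not Prolog's actual resolution strategy:]

```python
def query(total=5, upper=5):
    """Roughly: ?- between(0,5,X), between(0,5,Y), X+Y =:= 5.
    Backtracking is played by nested loops; each success is yielded."""
    for x in range(upper + 1):
        for y in range(upper + 1):
            if x + y == total:
                yield (x, y)

answers = list(query())
print(answers)   # [(0, 5), (1, 4), (2, 3), (3, 2), (4, 1), (5, 0)]
```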
> Nondeterministic programming languages can simulate the chance and
> effect drift functions of misreadings:
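[A minimal sketch of that drift in Python: an interpreter that, on each evaluation, picks one reading of a deliberately ambiguous token. The vocabulary here is invented for illustration:]

```python
import random

# Each ambiguous token maps to several candidate readings; the
# interpreter commits to one per evaluation, so the "same" program
# can mean different things on different runs.
READINGS = {
    "light": ["not heavy", "illumination", "set on fire"],
    "bank":  ["riverbank", "money house"],
}

def interpret(token, rng=random):
    return rng.choice(READINGS[token])

rng = random.Random(42)   # seed it and the drift becomes repeatable
print(interpret("light", rng))
print(interpret("bank", rng))
```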
> But these all feel like weak functional replacements for misreading.
> Which raises the question of whether misreading is necessary to get the
> effects of misreading: can rewriting or intentional ambiguity provide
> the same effects, or is there something either functionally or morally
> unique to the idea of misreading?
> - Rob.