From: Georgia Wolf 2
> condensing or expanding *normal* sounds.
That's an idea.
> Another thought was: what if in one ear certain sounds that are
> easier to anticipate or to recognize and that don't require
> attention were being received, and in the other ear
Yes, maybe give one ear higher-priority information (like nouns),
and the other ear lower-priority stuff (like articles). On the
other hand, articles are often superfluous anyway, so maybe
[intelligently] getting rid of most of them during text-to-speech
is another way to increase "reahing" speed.
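A first pass at the article-stripping idea could look something like
this (a toy sketch; the stopword list is my own guess, and the output
would still have to be handed to an actual TTS engine):

```python
# Toy sketch: drop low-information function words before text-to-speech.
# The word list is an assumption about what counts as "superfluous";
# a real system would need something smarter than a flat stopword set.
LOW_INFO = {"a", "an", "the"}

def strip_articles(text):
    """Return text with bare articles removed, punctuation preserved."""
    kept = [w for w in text.split()
            if w.lower().strip(".,;:!?") not in LOW_INFO]
    return " ".join(kept)

print(strip_articles("The cat sat on the mat."))  # -> cat sat on mat.
```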
From: "Christopher R. Maden"
> _The Mind's I_ describes an experiment like this.
> If the left channel is ambiguous, but the right
> channel has auxiliary information, the listeners
> will overwhelmingly describe the ambiguous text
> in the sense suggested by the auxiliary channel
Cool, so it works!
Now, who's going to write a text-to-speech algorithm that
goes at the speed of reading (or faster, since unlike reading
there's no physical movement involved*)? Considering that
text-to-speech is going to be big soon (although I'm getting
the feeling that much of it is hype, not least because privacy
is important), I bet people are already on it.
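For a sense of scale, here's the kind of speed-up that would be needed
(the words-per-minute figures are rough ballpark assumptions, not
measurements):

```python
# Ballpark: silent reading is often quoted around 250 wpm, while a
# comfortable TTS default is nearer 160 wpm. Both numbers are rough
# assumptions, used only to get a feel for the required speed-up.
def speedup_factor(reading_wpm=250, tts_wpm=160):
    return reading_wpm / tts_wpm

print(round(speedup_factor(), 2))  # -> 1.56
```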
* However, one complexity is providing a mechanism for
executing a regression (necessary at high speeds, since
mistakes are inherent): during reading you just look "back".
During "reahing", maybe you could make a [funny] sound?
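The regression mechanism might amount to nothing more than a cursor you
can pull back on a signal (a toy sketch; the class name and the fixed
jump size are made up for illustration):

```python
# Toy model of a "reahing" player: words stream out one at a time, and
# a regression signal (here a plain method call standing in for the
# [funny] sound) jumps the cursor back a few words, the way a reader's
# eye regresses on the page.
class Reaher:
    def __init__(self, words, regress_by=5):
        self.words = words
        self.i = 0
        self.regress_by = regress_by

    def next_word(self):
        w = self.words[self.i]
        self.i += 1
        return w

    def regress(self):
        # Jump back, clamped at the start of the stream.
        self.i = max(0, self.i - self.regress_by)
```

So after hearing four words and regressing by two, the next word picks
up again from two words back.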
> we can only *concentrate*, consciously, on one.
But that's the same in reading: if you ask a person to
read consciously, he can only read one word at a time.
But if you ask him to "just read", he flies. I think it
might just be a matter of training people, preferably at
an early age (immersive reading normally starts at about
the age of 10).
So if we simply *expose* people to "reahing", I bet it'd work.