## Monday, June 6, 2016

### Tone Deaf and All Thumbs: Proprioception of noncommutative quantum time-frequency consciousness

I stumbled onto a fascinating music book today in the library - it turns out it was written by a neurologist.

That explains its foray into comparing how photons are transduced into electrical signals versus how sound is transduced into electrical signals. Here are some search result links on the light analysis in that book:

### Tone Deaf and All Thumbs? An Invitation to Music-Making: Frank R ...

www.amazon.com › Books › Arts & Photography › Music
Amazon.com, Inc.
Convinced that everyone has an inborn ability to make music (a "biological guarantee of musicianship"),

Indeed I would agree with this. For example I once performed Rachmaninoff's piano prelude - number 10 I think it was - for a church service! I was being subversive since as far as I know Rachmaninoff was not a believer yet my take is that music is inherently spiritual. After the performance and service one of our paid vocalists - a female soprano - thanked me and said I should realize that I had talent as a gift from God. I would agree but add that everyone has music talent as a gift from God - only not necessarily as defined by Western music tuning! haha. On a side note I was notoriously singing off key in the church choir - flat - and it drove the male professional paid vocalists crazy. They were skeptical of my piano abilities. haha. But I also was singing in a hardcore punk noise band at the time which destroyed my capacity for equal-tempered tuning. Thankfully.

Now then, back to the topic at hand. I had an interesting precognitive music experience this past holiday - near the Solstice, when the energy is very strong. I was at my relatives' for their Christmas party; my cousin was playing piano, my aunt also played, and I declined. I asked my cousin if he had been at my classical piano concert when I finished high school - he had not, but his dad had been, and said it was "crazy." Indeed, I played several intense classical pieces by memory - a Bach concerto, Mozart, Brahms. Then I did avant-garde - John Cage and my own compositions. Now my cousin is a semi-professional keyboardist in a well-known party hippy trance blues band in the Twin Cities, and his dad was also a semi-professional musician in a similar blues band. My cousin lamented the fact that he cannot really read piano music. I quickly answered that actually it is better to play by ear. My aunt and cousin were surprised to hear me say this, and my aunt asked if I could play by ear. I said not really. haha.

The interesting thing is that I had already experienced this whole party as a precognitive dream. I never mentioned this to anyone since traditional Western science would not allow the possibility - except that, as I have recently referenced, quantum consciousness researcher Stuart Hameroff acknowledges precognition as being real. Anyway, just to test myself, I asked my aunt if she had held a similar party last year at Christmas and she said no. So unless the same exact party had happened years ago - which I doubt - then I experienced several episodes that were precognitive, including that conversation about music.

So is there any evidence that "playing by ear" is indeed better than playing by sight - reading music? Noncommutative geometry quantum unified field mathematician Alain Connes has detailed how reading multiple clefs of music simultaneously, transposing the different frequencies at the same time, is the best model of what the quantum computing future will look like. He said it appears to be "schizoid" by current mainstream science terms. I have taken him to task on this claim, in that 1) I have done that orchestration training and so I know exactly how much of a brain twister it is - and indeed musicians are proven to have enlarged corpus callosums connecting the right and left brain for integrating time and frequency; and 2) Connes is only considering written music while denigrating the "aural tradition" of music, which he considers not to be polyphonic since it doesn't have logarithmic equal-tempered tuning. This is quite ironic of Connes since, as I have detailed, equal-tempered tuning is commutative and Connes promotes non-commutative spacetime! haha. He doesn't even realize his error since he apparently does not know music theory well enough. More importantly, because time-frequency is non-commutative, that means the empirical truth is an infinite transduction of frequency energy.
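As a concrete check on the tuning claim: in 12-tone equal temperament every interval is a power of the semitone ratio 2^(1/12), so stacking intervals is ordinary multiplication - it commutes and closes the octave exactly - while stacking pure 3/2 fifths never closes the circle. A minimal sketch in Python (my own illustration, not Connes's notation):

```python
# In 12-tone equal temperament every interval is a power of the
# semitone ratio 2**(1/12), so stacking intervals commutes and
# closes exactly; stacking pure 3/2 fifths does not.

SEMITONE = 2 ** (1 / 12)

fifth_et = SEMITONE ** 7    # equal-tempered fifth, 7 semitones
fourth_et = SEMITONE ** 5   # equal-tempered fourth, 5 semitones

# Order of stacking doesn't matter (commutative), and a fifth plus
# a fourth is exactly one octave in equal temperament:
assert abs(fifth_et * fourth_et - fourth_et * fifth_et) < 1e-12
assert abs(fifth_et * fourth_et - 2.0) < 1e-9

# Twelve pure (3/2) fifths versus seven octaves: the mismatch is
# the Pythagorean comma, about 1.4% - pure-ratio tuning never
# closes the circle the way the logarithmic equal division does.
comma = (3 / 2) ** 12 / 2 ** 7
print(f"Pythagorean comma: {comma:.6f}")   # ~1.013643
```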

This brings us back to the neurologist. But first let's look at some examples of people learning by the "aural tradition" of music and then translating that back into the Western visual tradition of "reading music" - only these people are blind and so they could never read music! Yet their abilities are considered the best on their instrument - and it is the piano, the same instrument that is the focus of the neurologist Frank R. Wilson. O.K., so the first example is a famous one - Art Tatum.

Art Tatum is a "musician's musician." He was blind, and so he learned by the supposedly inferior "aural" tradition that Connes denigrates. Yet Art Tatum was acknowledged by other famous musicians as the best pianist alive at the time (and maybe of all time!!). I'm pretty sure I saw Vladimir Horowitz perform when I was young. But if I had seen Art Tatum, I'm sure I would have remembered it - even if I was very young! Horowitz, albeit a very famous classical pianist, would agree:

### Horowitz and Tatum – when jazz met classical

www.classicalmusicblogspot.com/horowitz-and-tatum-when-jazz-met-classical/
Jul 25, 2012 - On another occasion, when asked who the greatest pianist in the world was, Horowitz replied without hesitation, 'Art Tatum'. Horowitz also said ..

People then think - O.K., but Tatum just played jazz. Nope - he was once challenged about this, and he instantly played Bach perfectly and even improvised on Bach. I used to "jazz" up Bach myself. haha.

Rachmaninoff is probably my favorite classical pianist and composer, and he deferred to Art Tatum! Fats Waller is my favorite modern composer - I used to have his complete set of music as a C.D. collection - and he also deferred to Art Tatum.

### "I only play the piano, but tonight God is in the house" -- Fats Waller ...

www.metafilter.com/.../I-only-play-the-piano-but-tonight-God-is-in-the-ho...
MetaFilter
Oct 25, 2011 - 34 posts - ‎22 authors
Rachmaninoff wasn't the only classical (or even non-jazz-associated) musician to regard Art Tatum highly. Vladimir Horowitz thought highly of ...

I think I only listened to Art Tatum a couple of times - I got an album from the library. The thing about Art Tatum is that he is "inhuman" in his capabilities - he stuns professional musicians into silence!! So you listen to him and it's almost impenetrable - unrelatable.

O.K. enough said - except we find another staggering talent who also is blind and learned to "play by ear."

Derek Paravicini - he is six years younger than me. He has the savant ability of a photographic memory - only he cannot see, and so the "photograph" is an internal visualization. He is shown, having never heard Art Tatum before, reproducing Art Tatum's ability instantly!

### Derek Paravicini The Musical Genius - Autistic and Pitch-Perfect

mymultiplesclerosis.co.uk/ep/derek-paravicini-the-musical-genius/
Jul 8, 2015 - People compare Derek Paravicini with Art Tatum, the great, blind, jazz pianist from the early 20th century. His name is occasionally mis-spelt as ...
So I think his most famous performance is Art Tatum's "Tiger Rag."

O.K., it's overwhelming, so I can't listen while I type. Now the thing about Derek is that as a savant he is right-brain dominant, and so his left-brain "intentional planning" prefrontal cortex skills are highly limited. But we also know from other blind people that the visual cortex is hijacked, as it were, and retrained through brain plasticity to transduce the auditory cortex into visual information. This works via proprioception - the inner ear vagus nerve connection to the deep brain thalamus. The fascinating thing about the blind echolocation teacher Daniel Kish is that his left-brain prefrontal cortex is also highly adept, so he is very articulate in translating his right-brain skill back into intentional logic. And so Daniel Kish has trained many other blind people to retune their auditory cortex "by ear" into literally seeing the external world through sonar echolocation visualization - what Kish calls "flash sonar."

Now the big point of the neurologist author of Tone Deaf and All Thumbs is that despite the ear's ability to transduce a greater range of information than the eye - based on the range of energy frequency each organ transduces - the eye's much greater speed, based on light, overcomes the ear's ability. This would appear to give the edge to the visual capability of "reading music," but as I have just documented, the best musicians are those who cannot visually read music! It is also documented that a person responds faster to sound than to vision - ironically - and this is why gunshots are used to start races instead of a visual signal. From an article on the speed of thought:

Although reaction time tends to decrease as the loudness of the “go” increases, there appears to be a critical point in the range of 120-124 decibels where an additional decrease of approximately 18 ms can occur. That’s because sounds this loud can generate the “startle” response and trigger a pre-planned sprinting response. Researchers think this triggered response emerges through activation of neural centers in the brain stem. These startle-elicited responses may be quicker because they involve a relatively shorter and less complex neural system – one that does not necessarily require the signal to travel all the way up to the more complex structures of the cerebral cortex.

What is really going on here is how our brains need to process the visual information - our auditory cortex is adapted to living in the forest, where three-dimensional depth perception is not that available and so hearing is more important. Burkhard Bilger, "The Possibilian: What a brush with death taught David Eagleman about the mysteries of time and the brain," The New Yorker, April 25, 2011:
Our ears and auditory cortex can process a signal forty milliseconds faster than our eyes and visual cortex—more than making up for the speed of light. It’s another vestige, perhaps, of our days in the jungle, when we’d hear the tiger long before we’d see it.
Just as with birds, primates have their auditory cortex very closely aligned with the motor cortex, connected to the cerebrospinal neurohormones. This translates into "playing by ear" enabling a faster processing time for the body to perform than reading music visually.

A new study has confirmed the brain can process images in 13 milliseconds but takes much longer to move the eye or the body in reaction:
This ability to identify images seen so briefly may help the brain as it decides where to focus the eyes, which dart from point to point in brief movements called fixations about three times per second, Potter says. Deciding where to move the eyes can take 100 to 140 milliseconds, so very high-speed understanding must occur before that.
There is another component to hearing that now needs to be considered, and this is the "phase" change - not just the frequency or time, but the shift in time used to compare frequency between the right and left ear as stereophony. This is another order of magnitude faster than hearing time or frequency alone and the body reacting to it - microseconds versus milliseconds.

Dr. Mae-Wan Ho directly correlated this quantum phase synchrony of inner ear hearing to the skill of a pianist - quantum consciousness displayed on a macro-level!

The reason macroscopic organs such as the four limbs can be coordinated is that each is individually a coherent whole, so that a definite phase relationship can be maintained among them. The hand-eye coordination required for the accomplished pianist is extremely impressive, but depends on the same inherent coherence of the subsystems which, I suggest, enables instantaneous intercommunication to occur. There simply isn't time enough, from one musical phrase to the next, for inputs to be sent to the brain, there to be integrated, and coordinated outputs to be sent back to the hands (c.f. Hebb [28]).
Sounds presented in linear sequences are recognized as speech or music, much as objects in motion are recognized as such, rather than as disconnected configurations of light and shadow. How is this unity structured so that not only can we recognize whole objects, but distinguish different objects in our perceptual field? That is the problem of binding and reciprocally, of segmentation [35].

Now remember that quantum physicist B.J. Hiley has made this same comparison of quantum consciousness to listening to and understanding music. Dr. Mae-Wan Ho then reveals the secret:
The degree of precision may be estimated by considering our ability to locate the source of a sound by stereophony. Some experimental findings show that the arrival times of sound pulses at the two ears can be discriminated with an accuracy of a very few microseconds [see ref. 1]. For detecting a note in middle C, the phase difference in a microsecond is 4.4 x 10^-4. Accurate phase detection is characteristic of a system operating under quantum coherence.
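The arithmetic behind Ho's figure is easy to check: the fraction of one cycle that elapses in an interval dt at frequency f is simply f x dt. Her 4.4 x 10^-4 value corresponds to a 440 Hz tone (concert A) rather than middle C proper (~261.6 Hz), which would give about 2.6 x 10^-4. A quick sketch:

```python
# Fraction of one full cycle that elapses in dt seconds at freq_hz.
def phase_fraction(freq_hz, dt_seconds):
    return freq_hz * dt_seconds

MICROSECOND = 1e-6

# Ho's quoted 4.4e-4 matches a 440 Hz tone (concert A):
print(phase_fraction(440.0, MICROSECOND))   # 4.4e-4 of a cycle

# Middle C proper (~261.63 Hz) gives roughly 2.6e-4 of a cycle:
print(phase_fraction(261.63, MICROSECOND))
```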
And so now consider the ability of blind people to harness and leverage their auditory cortex to be able to see by echolocation and it all starts to make sense in terms of quantum consciousness:

Marcer [40, 41] has proposed a "quantum holographic" model of consciousness in which perception involves the conversion of an interference pattern (between a coherent wave-field generated by the perceiver and the wave-field reflected off the perceived) to an object image that is coincident with the object itself. This is accomplished by a process known as phase conjugation, whereby the wave reflected from the object is returned (by the perceiver) along its path to form an image where the object is situated. The perceiving being is into the act of perceiving, as Freeman [2] observes.

Marcer uses the example of listening to your finger snapping in front of you - the sound is perceived as "inside" you. Similarly, the clicking noises that the "flash sonar" echolocation technique utilizes transduce external reality into an internal holograph - the external and internal are then interwoven through non-local quantum consciousness.

Now then, quantum consciousness researcher Dr. Stuart Hameroff has documented that ultrasound resonates the whole brain as the self-consciousness of the microtubules. The "hypersonic effect" of ultrasound in the natural harmonics of music is documented to increase alpha brain waves. Why does this work, and how is it related to the opening of the third eye for holographic vision? The microtubules, according to Hameroff, operate the quantum consciousness, and the microtubules are made of tubulin protein, which is piezoelectric and so gives off and resonates with ultrasound. And what is the inverse of the ultrasound frequency - its period and wavelength?

### Ultrasound Tutorial - RIT Center for Imaging Science

https://www.cis.rit.edu/.../ultrasound/ult...
Chester F. Carlson Center for Imaging Science
Inverse of frequency - if frequency increases, period decreases. For example ... Wavelength (mm) = Propagation Speed (mm/microsecond) / Frequency (MHz).

### The principle of ultrasound - ECHOpedia

www.echopedia.org/wiki/The_principle_of_ultrasound
Sep 1, 2015 - Frequency is the inverse of the period and is defined by a number of ... Wavelength (mm) = Propagation speed in tissue (mm/microsecond) ...

That's right - its period, the inverse of its frequency, is in microseconds as phase - the same phase speed at which the two ears can synchronize. Young humans can hear ultrasound, and normal adults prefer the natural harmonics that resonate into ultrasound - the hypersonic effect - which increases alpha brain waves.
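Plugging numbers into the relation quoted in the tutorials above (wavelength = propagation speed / frequency), with the standard soft-tissue propagation speed of about 1.54 mm per microsecond (1540 m/s) - the 5 MHz example frequency is my own choice:

```python
# The relation quoted from the ultrasound tutorials, using the
# standard soft-tissue propagation speed of 1.54 mm/us (1540 m/s).

SPEED_TISSUE = 1.54  # mm per microsecond

def wavelength_mm(freq_mhz):
    """Wavelength (mm) = propagation speed (mm/us) / frequency (MHz)."""
    return SPEED_TISSUE / freq_mhz

def period_us(freq_mhz):
    """Period in microseconds - the inverse of frequency in MHz."""
    return 1.0 / freq_mhz

# A 5 MHz diagnostic beam: sub-millimetre wavelength, and a period
# of 0.2 microseconds - the microsecond regime discussed above.
print(wavelength_mm(5.0))  # ~0.308 mm
print(period_us(5.0))      # 0.2 us
```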
[1] Ashihara, K. (2007), "Hearing threshold for pure tones above 16 kHz," J. Acoust. Soc. Am., 122(3), pp. 52.
[2] Henry, K. R., Fast, G. A. (1984), "Ultrahigh-Frequency Auditory Thresholds in Young Adults: Reliable Responses up to 24 kHz with a Quasi-Free-Field Technique," Audiology 23, pp. 477-489.

Hameroff reported his mood elevated from ultrasound - this would indicate increased dopamine and serotonin. We know that the "frisson" skin orgasm from music is the increased dopamine from inner ear activation of the vagus nerve. But since we know ultrasonic harmonics in music increase the alpha brain waves, and we know increased alpha brain waves increase serotonin, we can infer that the high mood bliss Dr. Hameroff experienced came from increased alpha brain waves.

When the alpha brain waves are dominant they are very close to the deeper theta brain waves that activate the R.E.M. vision state for photographic memory and long-term memory storage, known as "long-term potentiation." Dr. Stephen Porges teaches "flexing of the middle ear," which focuses the hearing on higher frequencies and also activates the vagus nerve for increased relaxation and feel-good hormones. According to the inner ear research of Dr. Andrija Puharich, it is then ultrasound - which has ELF waves as the subharmonics from the transduction of ultrasound into electrical signals with magnetic fields - that causes the hydrogen to precess as a quantum spin or magnetic moment, creating a micro black hole information storage system. This process harmonizing the ELF waves is in tune with the theta REM visionary state.

Now we have the direct connection between the quantum brain, the third eye, and macroquantum entanglement via the inner ear proprioception focus of frequency. As I recently detailed, the left hand keeps the beat as time via the cerebellum, which also coordinates body movement and emotion, and this activates the right-brain cerebrum for visualization as frequency information guided by the frequency of music. This is the secret effectiveness of trance dance and singing as practiced by humans before left-brain dominant prefrontal language was crystallized. And so radical anthropologist Dr. Chris Knight details how this trance singing "harmonizes the emotions" in synchrony with the lunar phase of the pineal gland, and again we see the secret of how humans developed spiritual healing as the focus of their original human culture. As Robin Dunbar concurs, it was this group singing that enabled humans to bond in large groups, and it's now proven that group singing doubles oxytocin, the love bonding hormone of the heart, which is increased by the increased vagus nerve activation of serotonin.

### The Social Origins of Language - Page 10 - Google Books Result

Daniel Dor, ‎Chris Knight, ‎Jerome Lewis - 2014 - ‎Language Arts & Disciplines
Chris Knight, Jerome Lewis Daniel Dor ... began serving additional internal functions: choral singing harmonized emotions and built trust within the group.

This is why music is a natural God-given talent for all humans - we are hard-wired to enjoy music as our secret of social bonding and our secret to quantum consciousness as spiritual healing.

It has also been proven that human hearing "beats" time-frequency uncertainty, since human hearing is nonlinear. This has caused audiophiles to consider again the microsecond phase synchronization of stereophonic hearing from ultrasound harmonics - some audiophiles now take ultrasound harmonics into account in their digital programming to create this holographic phase synchronization effect, similar to actual live music. Even though humans are not considered able to "hear" ultrasound, the phase synchronization is considered a subconscious reaction of the inner ear to ultrasound.
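For reference, the linear (Fourier) bound that nonlinear hearing is said to beat is the Gabor uncertainty limit, dt x df >= 1/(4*pi). A quick sketch of what that bound implies for a given listening window (the 100 ms window is my own example):

```python
import math

# The linear (Fourier/Gabor) uncertainty bound: dt * df >= 1/(4*pi).
# A linear analyzer confined to a dt-second window cannot resolve
# frequency more finely than df = 1/(4*pi*dt).

GABOR_LIMIT = 1 / (4 * math.pi)   # ~0.0796

def min_freq_resolution_hz(window_seconds):
    """Best frequency resolution a linear time-frequency analysis
    can achieve within a window of the given duration."""
    return GABOR_LIMIT / window_seconds

# In a 100 ms window the linear bound is ~0.8 Hz; listeners have
# been reported to discriminate finer than this, which is why
# hearing is described as nonlinear ("beating" the uncertainty limit).
print(min_freq_resolution_hz(0.1))   # ~0.796 Hz
```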

### Patent US20150133716 - Hearing devices based on the plasticity of ...

May 14, 2015 - The invention describes a hearing improvement device including components ... heard frequencies to the inner ear, bypassing the outer ear and the middle ear ... said brain trainer comprises pairs of ultrasound emitter phased arrays, ... a phase shift between the energy source coil close to the power source ...

### [PDF]Technology Trends in Audio Engineering - Audio Engineering Society

www.aes.org/technical/trends/report2015.pdf
Audio Engineering Society
stem cells in the inner ear to regenerate new hair cells. .... minimum phase designs, increase of processing bit ... sites in the audiophile market, and then, a broader ... highly focused ultrasound beam was sent through the .... Lip sync standard.

### [PDF]EARS Project Newsletter No 6 Jan 2015 - Physikalisch-Technische ...

ultrasound frequencies … Second prototype ... Measurement of middle ear transfer function ... an intensive development phase transducers ... frequencies and airborne ultrasound by means of brain ..... but in sync with the sparse-sampling MRI.

O.K., so based on the quantum consciousness research of Dr. Stuart Hameroff, what we know is that ultrasound has the highest amplitude peak of whole-brain microtubule resonance, and this has been corroborated in other research that I previously sent to Dr. Hameroff. Dr. Martin L. Lenhardt, "Ultrasonic Hearing in Humans: Applications for Tinnitus Treatment," International Tinnitus Journal, Vol. 9, No. 2, 2003, p. 3 and p. 6:
When the imaging beam was focused at the center of the brain, patients reported hearing a high audio sound, much like tinnitus. When the ultrasonic beam was directed at the ear, the sound disappeared. Setting the brain into resonance resulted in a clear high-pitch, audible sensation consistent with brain resonance in the 11- and 16-kHz range.... Because ultrasound produces high audio stimulation by virtue of brain resonance, the direct use of high audio stimulation is more economical in power requirements and still stimulates the brain at resonance.
So Dr. Hameroff has documented how this ultrasound is actually the highest amplitude resonance of the microtubules, and Dr. Puharich documented that the ultrasound creates magnetic precession of the hydrogen in the ELF waves. Hameroff has argued that the EEG waves are subharmonics of the microtubules, and this shows the connection - the alpha brain waves, as increased serotonin, are from the ELF subharmonics as hydrogen precession that resonate the microtubules for quantum micro black hole information storage.
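For scale, the standard Larmor relation gives the precession frequency of hydrogen protons in a magnetic field: f = gamma x B, with gamma ~42.58 MHz per tesla for protons. This is textbook NMR arithmetic, not Puharich's own derivation - it just shows what field strengths put proton precession into different frequency bands:

```python
# Standard Larmor-precession arithmetic for hydrogen (protons) -
# a sanity check on field strengths, not Puharich's own derivation.
# f = gamma * B, with gamma_proton ~42.577 MHz/T.

GAMMA_PROTON_HZ_PER_TESLA = 42.577e6

def larmor_hz(field_tesla):
    """Proton precession frequency in a given magnetic field."""
    return GAMMA_PROTON_HZ_PER_TESLA * field_tesla

# Earth's field (~50 microtesla): precession near 2.1 kHz.
print(larmor_hz(50e-6))      # ~2129 Hz

def field_for_frequency(freq_hz):
    """Field (tesla) at which protons precess at the given frequency."""
    return freq_hz / GAMMA_PROTON_HZ_PER_TESLA

# For precession down in the ELF band (say 10 Hz) the field would
# have to be far weaker than Earth's, roughly 0.23 microtesla.
print(field_for_frequency(10.0))
```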

Dr. Hameroff has then stated "the higher you go, the deeper you go" - quoting the Beatles - and so the ultrasound is a subharmonic of the higher frequencies of the microtubules. But the key consideration here is the quantum phase synchronization - the microsecond propagation period that is the inversion of the ultrasound frequency. So then, as the microtubule electromagnetic frequency goes higher toward the speed of light as biophotons, the phase synchronization, as per de Broglie, goes to the speed of light squared as the superluminal quantum consciousness that enables precognition.

So then, as Dr. Hameroff states, the higher you go, the deeper you go, and this indicates again that the EEG waves are subharmonics, so that the higher the microtubule frequency resonance, the deeper the EEG subharmonics. In meditation, as per Ramana Maharshi, the key is to maintain consciousness - self-awareness of Gamma synchrony - even while in the deep sleep delta waves of 1 hertz. As per the asymmetrical resonance of time-frequency, the right brain increases in frequency into the biophotons with phase synchronization as the speed of light squared for Yuan Qi consciousness (the precession of the protons), and meanwhile the time of the left-brain cerebellum (left hand keeping the beat) goes down to 60 beats per minute - the 1 beat per second that creates the "Mozart Effect" - the photographic memorization trance skill of ancient shamanic training.

One of the key principles of quantum biology's nonlinear self-organizing growth is that higher frequencies have much stronger subharmonic amplitudes. This is from the research of Brian Goodwin:
In 1965, Brian C. Goodwin published a paper, "Oscillatory behavior in enzymatic control processes" [1], that described how two coupled biological oscillators could create a gamut of frequencies by varying the coupling constant. This provided a model for how two oscillators could control the timing for several biological processes all on different timescales. This discovery helped pave the way for our current understanding of both cellular organisms and of coupled oscillators.
We have seen that the possibilities for cell behaviors when oscillators begin to couple to each other is far greater than the frequencies defined by the free running independent oscillators. Things such as subharmonics and beat frequencies shown above could account for many long biological rhythms such as the circadian (24 hour) cycle. A cell can also create primary and subharmonics and pick out which ones it wants to monitor by changing substrate concentrations. This will change the coupling and therefore change which oscillator (1, or 2) has a larger amplitude.
This theory has become very popular in biological physics since its publication, and this paper has been cited over 300 times.

I cited it for my own research:  Professor Brian Goodwin in his book Temporal Organization in Cells (1963) discovered the secret of the snake: “the subharmonic oscillation always shows a considerable increase in amplitude over that of the fundamental oscillators so that a very appreciable amplification can occur.”
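A toy illustration of how coupled fast oscillators can expose much slower rhythms (my own sketch, not Goodwin's enzymatic model itself): the sum of two oscillations at nearby frequencies is amplitude-modulated at the beat frequency |f1 - f2|, and a single oscillator's subharmonics fall at f/n:

```python
# Two fast oscillators that couple can mark out slow biological
# time: their sum is amplitude-modulated at the beat frequency
# |f1 - f2|, and subharmonics of one oscillator fall at f/n.
# The specific numbers below are illustrative choices of mine.

def beat_frequency(f1_hz, f2_hz):
    return abs(f1_hz - f2_hz)

# Two oscillators near 1 cycle/hour differing by ~4% beat about
# once a day - a circadian-scale rhythm from hourly oscillators:
per_hour_1 = 1.00      # cycles per hour
per_hour_2 = 1.0417    # cycles per hour
beats_per_hour = beat_frequency(per_hour_1, per_hour_2)
print(1 / beats_per_hour, "hours per beat")   # ~24 hours

# Subharmonics f/n of a single fast oscillator:
f = 240.0
print([f / n for n in (1, 2, 3, 4)])   # [240.0, 120.0, 80.0, 60.0]
```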

And so what we find is that for audio perception outside the normal hearing range there is the creation of ELF subharmonics from ultrasound:

It was determined by applying tones of various frequencies (20 Hz to 250 Hz) to suppress by a fixed amount the generation of oto-acoustic emissions (DPOAE), a distortion product generated by the healthy inner ear in resonance to two beating tones in the 2 kHz range. Assuming that this constant suppression by the LF tone indicates constant LF-displacement amplitude of the basilar membrane at the DPOAE generation site, an equal-output function (similar to the ELCs) was obtained.
Metrology for a universal ear simulator and the perception of non-audible sound 2012

It is considered that tinnitus is also a subharmonic of ultrasound - and it also creates these ELF subharmonics. From an article on a vagus nerve implant to stop tinnitus:

Not much is known about how tinnitus occurs, but it's possible it's linked to otoacoustic emissions, the natural resonances from within the ear.

And so how to stop this resonance?

MicroTransponder, a company that's been working on a similar device for some time, has proven an implant could train the brain to stop tinnitus.... The implant is targeting the cranial nerve X, or the vagus nerve, which is the tenth major nerve that exits the brain.
Activate the vagus nerve.