Dr. Keith Waldron's presentation from the 2019 San Diego Pain Summit


Dear Barrett:

I think I’m in over my head. When Rajam approached me last year to speak in 2019, I hesitated. Actually, I said no. I didn’t have anything to offer. I didn’t belong on a stage. I certainly didn’t belong behind a lectern. At least that is what I was thinking ... until I wasn’t. But one person tried to talk me into it. Then another, and another. And you sure as hell didn’t try to talk me out of it when I talked it over with you. Next thing I knew I was submitting a proposal, which Rajam quickly accepted.


But now, with less than 4 months until the date of my talk, I’m having second thoughts. I don’t consider myself an expert in working with patients with pain; shit, that is why I still travel to San Diego every winter. I go to San Diego so that I can learn from people smarter than me. But I’m not a researcher. I’m definitely not the smartest person in the room, especially that room. I haven’t published any papers or books, not even a chapter. I’m not a lecturer. I don’t sell continuing education courses. I am not especially charismatic. I often stumble over my words. What the hell does a meager physiotherapy-trained home health administrator in a 3-county regional home care agency in the middle of nowhere have to offer some of the most progressive minds in the fields of physio-, occupational-, and massage-therapy? Or chiropractic? Or medicine? Hell, my own staff tuned me out last year when I disseminated the American Physical Therapy Association’s Home Health Pain Management materials, and I was a co-author! … I suppose I probably shouldn’t lead with any of that, huh? :)


I don’t know what people want to hear, or how they want to hear it. I mean, hell, I am going to be presenting on the same stage as Antonio Damasio. Descartes’ Error was one of my first introductions to some of the neuroscience literature when I first got the itch to really pursue some readings on the subject. I loved reading that book and pondering the differences between feelings and emotions and what that might mean. Now I’ll be standing on the same stage at a conference. No pressure or anything, right?


When the schedule was released, the first thing I did was check to see if I had to go on after him. Thankfully, I don’t. But I DO have to follow Tim Beames, so - you know - win some, lose some, I guess. When I think about it, though, I suppose that there is no real good spot to be scheduled. The fact of the matter is that I am punching well above my weight, Barrett, and I’ve decided to blame you.


I blame your daily blog entries on SomaSimple. I especially blame that damned - but brilliantly simple - aphorism. It should be plastered over the desktop of every clinician, professional, and human being who works with patients with painful complaints. It resonates with me still. You wrote:


“When the primary complaint is pain, the treatment of pain should be primary.”


I remember sitting at my home office PC while I read those words. I just leaned back in my chair, looked up to the ceiling and said to myself, “Aw, fuck.” Instantly, I realized that I had been doing things wrong and I wanted oh-so-badly to be doing things right. I read and searched more. I read Gifford’s Evolutionary Reasoning chapter in Topical Issues in Pain and was floored by how wrong my conception of pain had been. I read Melzack’s Neuromatrix paper and was all-in. There are a couple of guys in Australia who have published a book that tried to explain things in a way that made more sense too. I felt empowered. I was explaining pain to anyone who would listen, which turned out to be not-too-many people. Which, of course, is how I ended up in home health - that is your fault too, you know? But I won’t blame you for my move into middle management; that one is my fault alone.


You encouraged me to read. You encouraged me to write. To empathize. To wear a neck-tie. To write again after I had stopped. To become less hairdresser, more therapist. You encouraged me to think critically. To pay attention … to everything. To become more than I was. To strive to be the most well-read person in the room. To develop a premise.


Damn you and that notion of premise, Barrett. I have spent the last 5 years trying to develop a premise that makes sense to me, but the more I learn and the deeper I dive, the less sense it all makes.


Don’t mistake me, I am unquestionably in a better place now than I was. For instance, I am thankful for you bringing my attention to the essay by Rachel Naomi Remen, titled “In The Service of Life”. Her brief writing remains one of the most important things that I have ever read.


I was reminded of it often last year at San Diego, but at no time more than during the patient experience panel, when the mother of a teenager with chronic pain desperately sought guidance from the panel. I was walking the room that day; I handed her the microphone and knelt down beside her before she bravely stood up in the middle of a large room - in front of hundreds of strangers in San Diego and online - exposed, tearful, exasperated, and at a loss. When she stood up, she didn’t know how to help her daughter; she returned to her seat powerless, still carrying the same burden she brought with her.


Her emotion and humility lay in stark contrast to the righteous and self-congratulatory tone emanating from the summit’s participants. Of course, I was one of those folks, maybe even one of the most righteous - I don’t know, but I was acting the good physio, seeking information on the latest research and theories about the treatment of pain. While countless physios were participating in scientifically unsubstantiated continuing education courses that same weekend on skin-scraping, kinesio-taping, visceral manipulation, or postural syndromes, we were learning legitimate stuff. Important stuff. Cutting edge stuff. We were the smart ones; that’s why we were there. But if we were so fucking smart, why the hell was that mother still crying when she sat back in her seat with no better direction than before she had risen?


It occurs to me now that she came to the summit with a false premise, and it isn’t her fault; it is the same premise that most clinicians work from. She came to the 2018 Summit thinking that we offered help, but we don’t. Not really, anyway, and when we do, we are doing our jobs wrong, I think. I suspect she was disappointed at best. I dare not think of how she felt, or what she thought, at worst.


The problem, of course, is that we shouldn’t aim to help. Twenty years ago, Remen asked, “Perhaps the real question is not ‘How can I help?’ but ‘How can I serve?’” Most of the time, our professions fail to see the distinction; other times we simply fail to acknowledge it. But the distinction is an important one - one that we should keep at the forefront of our minds when we engage with our patients. Remen wrote:


“Serving is different from helping. Helping is based on inequality; it is not a relationship between equals. When you help you use your own strength to help those of lesser strength. If I'm attentive to what is going on inside of me when I'm helping, I find that I'm always helping someone who is not as strong as I am, who is needier than I am. People feel this inequality. When we help we may inadvertently take away from people more than we could ever give them; we may diminish their self-esteem, their sense of worth, integrity and wholeness. When I help I am very aware of my own strength. But we don't serve with our strength, we serve with ourselves. We draw from all of our experiences. Our limitations serve, our wounds serve, even our darkness can serve. The wholeness in us serves the wholeness in others and the wholeness in life. The wholeness in you is the same as the wholeness in me. Service is a relationship between equals.”


She continues:

“Serving is also different from fixing. When I fix a person I perceive them as broken, and their brokenness requires me to act. When I serve I see and trust that wholeness. It is what I am responding to and collaborating with. ... If helping is an experience of strength, fixing is an experience of mastery and expertise. Service, on the other hand, is an experience of mystery, surrender, and awe. A fixer has the illusion of being causal. ... We fix and help many different things in our lifetimes, but when we serve we are always serving the same thing. Everyone who has ever served through the history of time serves the same thing.”


She finishes her essay by revealing that she writes from a perspective that we should all hear, consider, and contemplate. She writes:


"In 40 years of chronic illness I have been helped by many people and fixed by a great many others who did not recognize my wholeness. All that fixing and helping left me wounded in some important and fundamental ways. Only service heals."


I think I’ll argue that genuine and authentic service is what happens in the intersubjective space. The “third space” is what Quintner calls it. It is the same space that I had hoped could be rescued from the bio-, psycho-, social reductive splintering of a patient’s personhood. And yeah, I know that some folks think that I am tilting at windmills here – and I have had conversations with people far smarter than I am who think the distinction isn’t meaningful. And maybe they are right, but I can’t help but think that referring to our approach as humanistic, rather than biopsychosocial, has the potential (at least for a pedant like myself) to actively remind oneself that one’s patient is an emergent being, not the sum of a few discrete parts that can be itemized and categorized in research studies and patient charts.


In such a manner, Matthew Low, the physio involved with CauseHealth across the pond, has spoken and written eloquently about using a dispositional clinical approach to his physio practice; I can’t help but think that he is onto something. He too uses the term humanistic to describe his care, but he does so with a little more fanfare than I ever could, I think - which is a good thing, because he is very deserving of the praise. Come to think of it, I should encourage Rajam to seek him out in 2020.


As best I understand it - which probably means not much at all - the dominant view (or model?) of causation in science and medicine is derived from the philosophy of Hume, who proposed that we can’t really say that one thing causes another, only that two events seem to occur in succession - one often follows the other. This line of thinking seems to be a good fit with how medicine studies patients and interventions. It is the stuff that RCTs are born from, after all. Our studies seek out and investigate regularities, not causes.


And most philosophers agree that regularity cannot (usually) be determined from a single instance. So usually, we need a multitude of instances or events to begin to see a pattern of regularity, to begin to say that one event follows another. This is how we derive our own hierarchy of evidence. You know, the pyramid that informs us that case studies carry less weight than cross-sectional studies … eventually ascending to RCTs and systematic reviews at the top of the pyramid - the apex of evidence quality - because the bigger the sample, the more confidence we can have in the regularities discovered (or not). One challenge to these methods of study, however, is that they only tell us about people and interventions on average. They don’t tell us about a specific patient in their specific circumstances. No, we must rely on the wisdom of the clinician for responsible clinical application of research findings with each individual patient.


Of course, another reason we prefer RCTs is that they carry less risk of bias, because studies are conducted with increasing rigor as we ascend the hierarchy of evidence. But that isn’t necessarily the case. What we fail to recognize is the implicit bias that causation and regularities are as Hume described them. But the folks involved with CauseHealth are openly asking the question, “Is it really so?”


Some of the contributors from the study of philosophy propose that, instead of Hume’s take on causation as regularities, perhaps we should consider a dispositionalist view of causation. For instance, if we conceive of things as having powers (or tendencies) in complex circumstances, we can re-conceptualize causation as something non-linear, whereby individual objects contain properties, or powers, that are responsible for effects (a glass, for instance, has a power or disposition of fragility, a propensity to break when dropped onto the floor). This seems so intuitive to me, I fear it might be wrong.


It seems that such an account of causation explains why smoking causes cancer, but not every time. Lung cancer tends to follow smoking, given the right - albeit unfortunate - circumstances. It seems that such a view of causation might be able to explain a variety of unexplained symptoms inclusive of irritable bowel, headaches, chronic fatigue, and yes... chronic pain.


Advocates for this view of causation propose that we could imagine a series of powers, viewed as vectors, pointed in opposite directions. In the case of a smoker, for instance, perhaps there is one large power (smoking) that points to the left, in the direction of cancer, but perhaps there are enough smaller powers (dosing, genetics, and activity levels come to mind) pointing to the right, and the sum of their power exceeds that of the smoking power, so the patient doesn’t develop cancer after all. Could the same be true for patients with painful complaints?
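If it helps to make the vector picture concrete, here is a toy sketch in Python - the power names and magnitudes are entirely invented, a sketch of the idea rather than a model of anything real:

```python
# A toy rendering of the dispositional "vectors" idea: powers pushing
# toward an outcome (negative values) or away from it (positive values),
# with the net sum deciding the overall tendency. All names and numbers
# here are invented for illustration only.
powers = {
    "smoking": -5.0,               # one large power pointing toward cancer
    "low dose": +2.0,              # assumed mitigating powers
    "protective genetics": +2.5,
    "activity level": +1.0,
}

net = sum(powers.values())
tendency = "away from" if net > 0 else "toward"
print(f"net power {net:+.1f}: tendency {tendency} the outcome")
# prints: net power +0.5: tendency away from the outcome
```

Crude, I know - real dispositions presumably don’t sum linearly - but it captures the intuition that one large power can still be outweighed by the sum of several smaller ones.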


It is with this principle in mind that Low wrote his latest article, titled, “Managing Complexity in Musculoskeletal Conditions: Reflections from a Physiotherapist.” As I read it, I smiled and thought, “Ah yes, maybe here we have the beginning of the answer.” That is, of course, until I recalled a paper from Jeffrey Bishop that made me realize that my hopes, while admirable in my own mind, will - in all likelihood - ultimately fall short.

You see, Cartesian dualism still pervades our work, even when we try to ignore the mind/body divide. Medicine (inclusive of therapy, depending on the practitioner) is a science, after all. Science works from a basic design – observers observing the observed. The world, our patients included, is therefore considered manipulable to science. We do not, as Bishop asserts, “escape instrumental thinking about humans. Indeed it is because humanism is added to the biologism of medicine that it consummates its metaphysical relationship to medicine by asserting its usefulness to medicine. In a way, narrative medicine becomes a tool that gains the trust of a patient, a more subtle tool because it masquerades as authentic relationship.”

He argues that medicine necessarily views the patient as an animal-machine in need of something else. Think about it for a minute. Medicine is an intervention – we add something to the patient to make them more complete, more robust. Sounds a lot like fixing, doesn’t it?


How do we serve our patients within the flawed paradigm of a fixing-profession? All the patient-centered model does is add more to the biopsychosocial approach, under the guise of giving the patient more control. When that hasn’t worked, we have added on something else still: the patient narrative. But even then, the patient’s narrative becomes instrumental; the more we understand our patients, after all, the more likely they are to do what we need them to do so that we can achieve what we consider to be a positive outcome. Ultimately, even with the best of intentions, it is about controlling yet another variable. And sure, maybe it works better than looking at the patient before us as only a biological entity, but it seems that it still has the potential to fall short of the service we should strive for.


Of course now I am talking about ideas that aren’t about science, I am talking about how we become who we aim to be. Who we should be. Who and what our patients are. This is the stuff of philosophy, not science. At least not in the way we often conceive of science today.


I suppose that this is the less-scientific thinking that has fueled my inquisitiveness these last few years. After all, it didn’t take me long to grow tired of neuroscience readings. That isn’t to say that it was beneath me. To the contrary, a lot of it was over my head. Initially, I had to revisit old texts that had gathered dust and spider eggs in my basement. I had to spend a ton of time revisiting the basics - some of which I confess that I had never learned at all - in an effort to discern what some of the authors were writing about. Some were easier to understand than others, but each author seemed to arrive at similar conclusions: activity in area A, region B, or network C strongly correlates with behavior X, sensation Y, or choice Z.


So essentially, neuroscience - which is admittedly very much in its infancy as far as sciences are concerned - explains things as, “Well, physics happens and then - POOF! - experience.” It’s the ‘POOF!’ - that moment when electricity becomes experience - that I had the most interest in, because that is where pain happens. And while neuroscience has excelled at reduction - digging deeper and looking with finer detail to explain many phenomena - it is that black box that is of greatest interest to me. It is that black box that might one day explain consciousness and experience, of which pain is only a small aspect.


And it is that black box which should feed our humility. Seriously, think about it: our jobs are to interact with another human being to treat an ailment that we don’t even have access to. Yet - the treatment of a patient’s pain should still be primary, right?


I am reminded of deep-thinking college freshmen in Philosophy 101 classes asking, “How do you know that the red that you see is the same red that I see?” In the same vein, isn’t providing care for the treatment of painful complaints akin to someone walking into an ophthalmologist’s office seeking treatment for their perception of reds as more saturated than they used to be? Or than they want them to be? What would that mean, exactly? How would we measure that? How would we identify what is normal? How would we decide to intervene? How does one treat consciousness?


Science - and neuroscience especially - is (to date, mind you) really good at micro-views and perspectives. It is good at collecting correlation data, but it still struggles with the more macro questions. Questions like pain. Questions to problems that we - as providers - are expected to deal with daily. Questions that we need answered if we are ultimately going to become proficient in providing care to individuals with painful complaints. So I started exploring some literature that ran adjacent to the more-straightforward neuroscience readings. I started to look more into neuro-philosophy.


Ugh - I remember being so resentful of Dr. John Quintner when he commented to Eric Kruger in a BodyInMind thread. He said, “A word of caution – exploring the realms of consciousness is a job for highly experienced neuro-philosophers. For us lesser mortals, it can be a “swamp” from which few of us will emerge any the wiser.” But now, 4 years later, I must confess that he was mostly right. Perhaps Eric is having more success as a greater-mortal-than-I, but I remain stuck waist-high in the muck of confusion and I see no dry land in sight. But he was also wrong: there is a little bit of wisdom to be found waist-high in the muck as well.


Over these last 5, 6, 7 years, I have discovered not only what I don’t know, but also what I can’t possibly know. There are people out there who have spent their entire academic lives researching and contemplating subjects that I - after walking the dog, eating breakfast, putting the kids on the bus, going to work, coming home, shuttling the kids around, cleaning the dishes, spending a little time with Christine - get to read an occasional book about every few weeks, a book that merely glosses over material an expert would scoff at as elementary.


Look at the field of embodied cognition, for instance. The names in this field of study, if one were to pay attention to such matters, are intellectual heavyweights in every sense. Varela, Thompson, Clark, Noë. And they are all writing about concepts and hypotheses that run contrary to the standard conceptual view of cognitive neuroscience. How the hell am I supposed to parse out who is right? Who is wrong?


These are folks that I admire, even if I don’t know if I can or should agree with them. But let’s be honest too: standard cognitive neuroscience, while sounding quite quaint when someone refers to it as ‘standard’, appears to have offered us legitimate explanations for phenomena at a staggering rate. Perception. Attention. Memory. Language. Problem solving. Categorization. Each domain is currently explained - in large part - by standard cognitive neuroscience, a representational model that explains how the nervous system works.


As I understand it, the prevailing standard view of the central nervous system is as a symbol-manipulating device. A computer of sorts. At a physical level, neurons are individual cells that we can identify and label, but - functionally, in cognitive neuroscience - they act as symbols. They stand for, or represent, things in the world via stimulation of bodily receptors in the periphery. The brain is assumed to be computational. It is studied by means-end analysis. If we understand how the brain computes, we understand how the brain works, how you and I think, or cognize. Lawrence Shapiro says, “All the ‘action’, so to speak, begins and ends where the computational processes touch the world. The cause of the inputs and effect on the world that the outputs produce are, from this perspective, irrelevant for the purposes of understanding the computational process taking place in the space between input and output … [standard cognitive neuroscience] tend[s] to draw the boundaries of cognition at the same place a computer scientist might draw the boundaries of computation - at the points of interface with the world … Cognition is computation, computation happens over symbols, symbols begin with inputs to the brain and outputs from the brain, and so it is in the brain alone that cognition takes place and it is with the brain alone that cognitive science need concern itself.”


A computer is a passive receiver of information. People certainly are not.


But still - even if our current means or concepts in studying cognitive sciences were to be modified, altered, tweaked, or replaced, to what extent?


Varela, Thompson, and others advocate for viewing cognition not as computation, but as embodied action. That is to say that they aim “to emphasize once again that sensory and motor processes, perception and action, are fundamentally inseparable.” As you and I move through a room our movement will produce opportunities for new perceptions, while simultaneously eliminating the opportunities for others. By the same logic, the new perceptions reveal opportunities for new movements. So, movement influences and constrains perceptions which in turn influence and constrain available motions, which then constrain new perceptions, and yada, yada, yada, maybe I’m not just my brain after all. In this view, an action doesn’t result as an output from a perceived input; rather, movement and perception are inextricably linked, each determining the other.

But there are still others who think that the standard computational view falls short in other ways. Lakoff and Johnson argue that “the peculiar nature of our bodies shapes our very possibilities for conceptualization and categorization,” and that, “language shapes the way we think, and determines what we think about.” They somewhat famously assert that within language, the “essence of metaphor is understanding and experiencing one kind of thing or experience in terms of another,” a roundabout means of expanding how we may understand experiences beyond basic concepts.


It is the basic-concepts part, though, which seems most interesting to me at the moment. Their notion of basic concepts appears to hinge on concepts of relation. Up. Down. Front. Back. Left. Right. These “basic” language concepts do not require metaphor for understanding, only experience.


But what does ‘left’ mean to a worm? They don’t seem to have a front or back, top or bottom. What about a thought-experiment about a spherical being in outer space? Where is left? Right? Up? Down? Front? Back? Lakoff and Johnson argue that how we think - the language we use, even at the most basic levels - is necessarily embodied.


And there are still others who assert that cognition emerges from a dynamic system with the brain interacting dynamically with(in) a body, dynamically interacting with its environment. In such a view, we can’t really understand activity of the brain without also (at the same time) understanding activities taking place in the body and the world. The boundaries of the cognitive system are thereby expanded; cognition might actually emerge not just from the brain, but from the interaction of multiple dynamic systems (in our case brain, body, and environment). But this feels a bit dualistic, doesn’t it - just with the add-on of the area that the body interacts with? Isn’t it possible to view the entire nervous system (central and peripheral) as a whole, interacting with an environment? Is that a different sort of duality that we could get behind? I don’t think I know.


And what if, rather than expanding the mind, we consider the mind to be merely extended? That is how Andy Clark thinks it all works. In Clark’s view, gestures are part of our cognitive apparatus. So too are scribbles in a notebook. He writes, “These are the cases when we confront a recognizably cognitive process, running in some agent, that creates outputs (speech, gesture, expressive movements, written words) that, recycled as inputs, drive the cognitive process along.” Gestures become a means of spatial reasoning. Another example is my use of my cell phone’s assistant to set reminders - offloading my remembrances to a device (same as Otto’s notebook). Words and reminders, in this instance, serve as a self-generated external structure on thought and reason. If our memory simply stores bits of information, does it really matter whether those bits of information are stored in wet-ware, a piece of paper, or a cell phone?


Are any of these conceptions of mind accurate? Which, if any, could be considered the truth? Which are more likely wrong than right, and how small does a likelihood need to be before we stop considering it even possible? I don’t know about you, but I don’t know enough to decide. I can’t know.


Clark has been beating the drum for the Bayesian, predictive mind as well. Of course, Bayes’ theorem is a bit counter-intuitive to begin with, right? I hadn’t really thought much about it until I started reading Hohwy and - frankly - I had no idea I was so shitty at probability. For instance, if a woman told me that she had a positive finding on a mammogram, and that the test detects breast cancer 80% of the time it is there, I would fear for her, never thinking that she actually has less than an 8% chance of having cancer. Even as I write it now, I feel as though I need to retrace my intellectual steps to assure myself that I am right, even though I know for certain that I am. The numbers just feel so unnatural.
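For what it’s worth, the arithmetic behind that counter-intuitive figure is a single application of Bayes’ theorem. A minimal sketch, with the caveat that the 1% prevalence and 9.6% false-positive rate are the standard textbook assumptions - only the 80% sensitivity comes from my example:

```python
# Bayes' theorem for the mammogram example.
# Assumed (textbook numbers): 1% prevalence, 9.6% false-positive rate.
# From the example above: 80% sensitivity.
prevalence = 0.01        # P(cancer)
sensitivity = 0.80       # P(positive | cancer)
false_positive = 0.096   # P(positive | no cancer)

# Total probability of a positive test, then Bayes' rule.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = (sensitivity * prevalence) / p_positive

print(f"{p_cancer_given_positive:.1%}")  # prints 7.8%
```

Or in plainer terms: out of 1,000 women, roughly 10 have cancer and 8 of those test positive, while about 95 of the 990 healthy women also test positive - 8 positives out of 103 is under 8%.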


Yet, Hohwy, Clark, Friston and others are telling us that our brains employ the same kind (or style?) of statistical analyses when perceiving and acting in our world. Predictive coding. Sounds a bit futuristic, doesn’t it?


In their view, not only are our brains not passive receivers of inputs, but they create our experiences primarily via top-down processes, not bottom-up. When I drive my car up to the end of my street, come to rest adjacent to the stop sign, and look to the left, my mind has already predicted what I will see. When I turn my attention to the right, my mind has already predicted what I will see there as well. Meanwhile, when I look to the left again and see a cyclist that I almost strike with my car - a cyclist that seems to have appeared from nowhere in particular - I am surprised, and relieved to have seen him when I did. I wonder how it happened that I didn’t see him the first time, and - for a very brief moment - I wonder what else I may have missed. I think too about selective attention; that is, of course, until I am quickly distracted by my car stereo and my mind wanders elsewhere.


By the predictive coding account, higher levels in the brain (distant from the various environmental sensors at our periphery) make educated guesses about the barrage of sensory input coming into the brain, and the brain is very strategic about how much of that input data it processes. Prediction errors signal “surprises” - differences between predicted and actual sensory signals - to higher levels in the brain. By such a process, the brain need only process the bits of information that it didn’t predict accurately; it can ignore all the sensory bits that it predicted, or guessed, right.


Kinda like encoding video. Perhaps you’ve never had to put much thought into it, but when your kids and grandkids take a movie from a DVD and put it on their computer, they are able to take over 4 gigabytes of information and reduce it to 20% of its original size. The way software engineers accomplish this is by using encoders that, in large part, only keep track of the changes between frames in the movie.


Imagine for a moment that you have a small 3x5 spiral notebook - you know, like the ones detectives used in old films - imagine that you take that small detective’s notebook and you draw a thin circle on the far left margin of the first page, then (on each of the next 30 pages) you draw the same circle incrementally a little bit more to the right of the page until you reach the far right margin. Now, when you flip through the pages, you can see the circle or ball move from the left side of the page to the right, like an old flip book.


Now, imagine that you want to store that flip book animation digitally, by placing your pages one-by-one on a scanner - let’s say at 300 dpi, in monochrome - so we are looking at an image that features black or white dots only. No color. No greys. Just blacks and whites. With these settings, a 3x5 piece of paper would require about 169 kilobytes of storage space (900 × 1,500 = 1,350,000 one-bit pixels). Multiplied by 30 pages, that figure increases to roughly 5,000 kilobytes of information.

But what if we only record the differences between frames? Sure, we need an image to start with (consider it a prediction, of sorts), but what if we then only record the differences between the pages? Most of the image will remain white through all 30 pages, or frames; only a select few pixels will change from page to page as the circle moves across, switching a select few white pixels to black and the same scant number of black pixels to white. If we assume that the little ball is only half an inch in diameter, we only need to account for approximately 950 pixels that change from page to page. Now, instead of storing over 5,000 kilobytes of information, we only need to store and process 169 kilobytes for the initial prediction and then an additional 0.12 kilobytes per page for the changes across the next 29 pages - a staggering 97% reduction in the amount of information that needs to be processed in order to reveal the same flip book to the viewer. And the compressed version would be indistinguishable from the 5,000 kilobyte original.
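If you’ll indulge the metaphor one step further, the bookkeeping can be sketched in a few lines of Python - a toy frame-differencing scheme, nothing like a real video codec, just the flip-book idea made literal:

```python
# Toy frame-differencing: store the first frame whole (the "prediction"),
# then, for each later frame, only the pixels that changed.
# Frames here are tiny 1-bit "images": flat lists of 0s and 1s.

def encode(frames):
    """Return the first frame plus, per subsequent frame, its changed pixels."""
    key = frames[0]
    deltas = []
    for prev, curr in zip(frames, frames[1:]):
        changed = [(i, px) for i, (old, px) in enumerate(zip(prev, curr)) if old != px]
        deltas.append(changed)
    return key, deltas

def decode(key, deltas):
    """Rebuild every frame from the key frame and the recorded changes."""
    frames = [list(key)]
    for changed in deltas:
        frame = list(frames[-1])
        for i, px in changed:
            frame[i] = px
        frames.append(frame)
    return frames

# A 10-pixel "ball" sliding one pixel to the right per frame.
frames = [[1 if i == t else 0 for i in range(10)] for t in range(5)]
key, deltas = encode(frames)
assert decode(key, deltas) == frames        # lossless round-trip
assert all(len(d) == 2 for d in deltas)     # only 2 pixels change per frame
```

Only two pixels change between any two frames here, so almost nothing beyond the first frame needs storing - the same economy I tried to describe with the scanner numbers above.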


Now, I grant you that my flip-book example is very rudimentary, but proponents of prediction error minimization argue that the brain encodes information similarly, and in doing so is able to process a great amount of information with spectacular efficiency. Sticking with vision as an example, it is argued that the brain uses a top-down method of first predicting (using Bayesian inference) what we should see, and then it plays a type of matching game - you know, like when we were kids, hoping that the cards we flipped over in succession would look alike. When the predictions coming from the top are met with an expected signal traveling from the bottom, the message from the bottom needn't be processed any further - to do so would be a waste of valuable energy and resources.


For instance, when I am staring blankly at a white wall, my mind need not entirely reconstruct what I see after each occasion that I blink - to do so would be a waste of energy. The brain simply predicts what sensory information will be traveling through the optic nerves when my eyes reopen, and the information needn't be processed very heavily. Only when a prediction seems wrong, when a prediction error occurs (for instance, if you placed a picture of a bluebird in my visual field mid-blink) - only then will such new and important information work its way toward the top - toward higher brain centers - until the ever-dynamic prediction manifests in a way that aligns with the incoming sensory input. And so this process continues ad infinitum, not just for our visual perception, but for each and every perception. Hearing. Smell. Touch. Pain. Perhaps Mick Thacker will have something to share with us in the future with regard to the latter. Maybe not. I can't know.
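The "only the surprises travel upward" idea can be sketched in a few lines of code. This is my own toy illustration, not anyone's actual model - frames are just lists of 0s (white) and 1s (not white), and the only thing passed along is the set of pixels where the prediction failed:

```python
# A minimal sketch of "process only the prediction errors."
# Frames are flat lists of pixels; 0 = white, 1 = anything else.

def prediction_errors(predicted, incoming):
    """Return (index, actual_value) pairs wherever the prediction was wrong."""
    return [(i, actual)
            for i, (expected, actual) in enumerate(zip(predicted, incoming))
            if expected != actual]

def perceive(prediction, incoming):
    """Revise the prediction only where it failed; confirmed pixels cost nothing.
    Returns the updated prediction and how many errors had to be processed."""
    errors = prediction_errors(prediction, incoming)
    updated = list(prediction)
    for i, value in errors:
        updated[i] = value   # only these spots are sent up for revision
    return updated, len(errors)

# Staring at a blank white wall: the prediction is confirmed after a blink,
# so nothing propagates upward.
wall = [0] * 12
updated, n_errors = perceive(wall, [0] * 12)   # n_errors == 0

# A "bluebird" appears mid-blink: only the surprising pixels travel upward,
# and the prediction is revised to match the new input.
bird = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0]
updated, n_errors = perceive(wall, bird)       # n_errors == 3
```

The design point is just the one from the flip book: confirmation is cheap, surprise is expensive, and the system only pays for surprise.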


And that is just the sensory stuff - Friston's work on motor action is some serious next-level shit. I don't even know where to begin, because it is so far over my head. So, so far. First, I tried to read a couple of articles, but the dude has a 30-page paper on action and behavior in a journal titled Biological Cybernetics. Hell, I didn't even know biological cybernetics was a "thing". I have tried to listen to his lectures, hoping that I can make heads or tails of it, but it is daunting … and at the same time it is laughable. To think that I can just sit down and listen to a guy with a PhD in neuroscience talk about the mathematics involved in how the brain makes efficient inferences via the free-energy principle, while (myself) only possessing a clinical doctorate in a subject primarily grounded in basic anatomy and kinesiology … I mean, seriously, what could I possibly expect? I would need 3-4 years of bachelor's studies in mathematics, in statistics, and probably in neuroscience too just to discern whether anything he says makes sense theoretically, and that is before even thinking about whether or not his ideas might actually be right, or accurate.


But then, the more neuroscience I read, the more the mind (even if embodied) seemed to be understood as a mere machine of sorts. Eventually, regardless of where we draw the lines between us and the world we are situated in, we still need to ask: What is consciousness, then, if we are able to reduce the machinations of our nervous systems to physics? How does subjective experience arise or emerge from a physical, material world (what Chalmers has famously called the hard problem of consciousness)? Is consciousness just an illusion? An evolutionary by-product? If it were an illusion, what would it mean for our own experiences? Our patients' experiences? Our patients' pain? What does it mean when I ask a patient to explain to me how and what they are feeling? What should I expect when I ask a patient to engage in a behavior that I think (or at least I think I think) may be beneficial to them?


If there is no longer any space for the notion of a libertarian free will - you know, the good ol' ability to pick ourselves up by our bootstraps and engage in any behavior as we see fit - what kind of will are we able to exercise, if any at all? Many in the hard sciences don't think that we have a will at all. Of course, some philosophers, like Dennett, hold out hope for a free will that is somehow still compatible with a material view of the world - a free will worth wanting, I think he calls it. But I've heard him talk about the horrible consequences if people failed to believe in free will, and I have wondered if much of his view is simply grounded in well-disguised and ironically unintentional motivated reasoning. I know that I certainly feel motivated to find a glimmer of hope for the preservation of the idea of my free will.


That is, of course, when I am motivated to feel anything at all. Last year, I wrote a blog post about an experience that was too personal for me to publish at the time, although I did share it with one or two folks who may be in San Diego. It was titled, 'I've Taken My Self For Granted'. I wrote:


I am a thinker, a ponderer.


I am a stoic.


I am rational.


I am patient.


I enjoy reading and learning.


I am loyal.


I love my wife with all my heart, and my children (somehow) even more.


I am me.


I was me. That is, until it became harder to think. Until I was overwhelmed with emotion, holding back tears daily. Until I was irritated by the mundane. Until I couldn’t concentrate enough to read. Until my wife’s idiosyncrasies brought on thoughts of separation. Until I started contemplating how much life is really worth living after all.


I used to be me, now I feel as though I am someone different.


Or was I someone else, a different person, and now I am me?


All iterations of me have lived under the premise that a person is not their shell alone. No, a person is not defined by their physical body; they can’t be. The body constantly turns over new cells. The physical structure of my body only resembles, in form, the body that I had years ago. No, I am my mind embodied.


But, if my mind - my thoughts and behaviors change - then what, if not my body, makes me me?


Is it my history, my memories? I have a recollection of my past, a timeline along which my self has theoretically traveled. Yet I lose my memories as well. I don't remember stories that I once did, and there are some memories I am uncertain of: did they ever really happen, or are they only embellishments rewritten as memories? And if my history informs my self, but the memory of my history erodes, is my self eroded as well? Do I become less of who I once was? Or are there certain important memories that inform my self, while it is the unimportant memories that can erode without effect?


The references are full of Phineas Gages who have suffered physical traumas, and John Nashes, Syd Barretts, and Demi Lovatos who suffer from chemical "imbalances" (e.g., schizophrenia and bipolar disorder) that change and alter their selves. I read of athletes with CTE and work with the elderly suffering from dementia, and I am offered an example of how fragile the self is. I wonder how different they truly are from me. Does everyone's self change as often as the afflicted's, only not as drastically? What if we are never ourselves in the way we think or feel we are? What if our selves are necessarily more fluid, more flexible than we perceive them to be?


I have learned a harsh lesson: I now recognize that my self is a subjective and qualitative thing that I have no control over, despite the illusion that I once did. Next week I will begin treatment for Graves' disease. Over the next few months, the biochemical reactions that make me me will be altered. I don't know if I'll be the person that I was, but I'm quite certain I won't be the person I am. None of it will be my own doing. I'll have no more control over who I become tomorrow than I do over who I was yesterday. I wonder if I'll read this 6 months from now and question how I could have been so nihilistic.


Today, it doesn’t feel as if that could be possible, but I know that nothing should surprise me, either.

I know it reads as really dramatic now, but man … I've been depressed before, Barrett - but that particular episode was like nothing I had ever experienced. Nothing mattered. Not me. Not my wife. My kids were barely a blip on the radar. I was thinking about some pretty dark shit from a really dark place … and it completely destroyed the illusion that I was who or what I thought I was. I discovered that the self was more fluid than I ever thought it could be. My values shifted more dramatically than they ever had before. I lost purpose. It became clear to me that there wasn't anything that made me 'me', no essence. And that is when - on the advice of a friend - I started watching and reading about some of Deacon's ideas. And maybe they are not his alone - I don't know; I only know that it is through him that I have been exposed to them.

Bruce Hood says that the self is an illusion, and he might be right, in the context that we usually consider selves as nouns … things or people that we are. Deacon's account of selves as processes is a different one than the average person may employ, though, and (by proxy) so too is Sherman's, whose book is a much easier read (although still challenging, to me anyway).

Materialist accounts of the world as we know it don't account for values, likes, or motives. They don't account for what it means to be a self. They don't account for meaning. They don't account for what Sherman calls striving. It isn't physics that compels a patient to seek care for painful complaints. They are striving for something, are they not? They strive to reduce, better control, or abolish their pain, right? And pain is information, no? What meaning does it have? For whom?

The materialist accounts that I mentioned above predominantly observe humans through the lens of physics, not chemistry. But we - and all selves, as termed by Sherman - should necessarily be observed as biology: chemistry constrained in specific conditions and circumstances from which life emerges. In this regard, it is a category mistake to consider selves as mere fancy, probabilistic wet-ware machines with really neat environmental interfaces.

We strive. But we are not unique in this regard. All selves strive: organisms, beings, plants, bacteria, creatures big and small, all things biological. We make effort, working for our own benefit. When a computer does work, it is doing so for us; computers and machines don't strive. Selves do.

Selves, according to Sherman, are unique in the physical world, because we strive to resist the 2nd law of thermodynamics (albeit, mostly unconsciously). The rest of the physical world is flowing toward entropy, toward disorder. Not us. Our bodies, our selves as it were, work to regenerate. Machines and computers are not selves, because they don’t regenerate. They are not striving. We are, but chemistry can’t explain how or why.


We make efforts for our own benefit in response to the circumstances we are embedded in. Chemistry cannot explain that. Neuroscience can show correlates in the brain between inputs and perceptions, intentions and actions, but it will not be able to explain striving, because neuroscience studies computation. It studies the person as an embodied computing machine, not the person as a striving self. Sherman asserts in a talk at Google that when you try to go from computation to qualia, you have made Chalmers' hard problem the made-harder problem.


Sherman openly acknowledges that the term "striving self" is still a placeholder for something that science has yet to explain; it is not a scientific explanation in and of itself. It is an observation, an important observation.


We have wants, a will; but there is not yet a scientific explanation for how it is that we will, as obvious as it is to us that we have it. We have a will to live. All organisms do - they might not know or feel will like we do, but there is a striving to live and self-regenerate; even when I was at my lowest, I didn't cease to self-regenerate. No, I still did work to regenerate, to stay alive, even while I pondered whether I should. Sherman argues that will started with chemistry that - by accident - was biased toward doing work against petering out, a bias to self-regenerate, a constraint that prevents degeneration.

In this way, to even consider the notion of free will is an oxymoron. Freedom, after all, is the idea that anything goes. But the will constrains. The (focused) will is proactive and affords the self the ability to pursue its wants and priorities - the freedom to follow its biases in the direction of means-to-ends interpretation. Sherman says, "You're not choosing what to do free and independent of all outside circumstances; you're willfully, attentively, and responsively interpreting outside circumstances, doing biased interpretive work for your own benefit … especially in humans given our use of language. Interpretation gone wild. In visionary, delusional, unpredictable ways." In this manner of thinking, humans don't rise to a level of consciousness that begets will; instead, will is endowed in all selves, as they interpret their environment and do self-directed effort for their own benefit.


Or at least I think that is what I think they are saying - I’m still trying to wrap my head around it all, to be honest. And still - I would need more education than I have time for - or that I could possibly afford - to begin to even think about offering a legitimately critical appraisal of Deacon and Sherman’s perspective. And even if I did, I wonder still about the consequences for our consideration of people with painful complaints.


In their narrative, the will is about interpretation. Interpretation is filtered through values and meaning for each self, clustered in a community nested in other communities, beset by the ubiquity of cultures embedded in cultures. Kinda makes the term ‘biopsychosocial’ sound trite, doesn’t it?


4 years ago, John Quintner said that I’d be no wiser - well, not me, he was engaging with Eric; I’m the one that took it personally, though - and he was kinda right. I’m certainly not any smarter. I don’t know more now than before. I am aware of more ideas, though. I do know more about what I don’t know and what I can never be certain of, and I do think there is wisdom in that.


Perhaps that is what I should try to talk about in San Diego. Although, if I were in their seats, I don’t know if I would find it all that interesting. I’m not even sure if I could sit through a talk that would try to hit on these ideas in a mere 45-50 minutes. Each idea alone would be worthy of its own keynote talk from a more qualified presenter, I think.


Perhaps - instead - I should share with them my premise - not that I think my premise should matter much to them. Do others really care if I aim to serve patients in pain with humility in the face of ever-increasing uncertainty? In an environment that might be surrounding a situated body with an embodied - and maybe even predictive - mind? That each of us might possess just enough freedom to make a limited number of narrowed choices? That those choices might be made in a causal world of powers and dispositions? That the only advice I have to offer is that each of us should try to consider all that may be possible and then do our damnedest to not be an accidental asshole after? I don't know.


What I do know is that this thinking shit is a lot of work, Barrett. Isn't it just easier to do some manual therapy? To push on this, or pull on that, a little friction here-and-there with some soft language and tones, then - as an add-on of sorts - consider how someone's psycho- or sociological history may somehow tweak their experience? Does considering all this other shit make anyone a better therapist? If it does, I've yet to find the way. Paralysis by analysis is the phrase people use, and I'll be damned if it isn't apropos here.


But as for the summit - I still don’t know what to say, but I’ll figure something out. I’ve got to, right? There are only 4 months left until I’m standing on that stage, trying my best to avoid coming across as an imposter.


I hope you are well, Barrett - I miss your daily writings, but am buoyed by the thought that you are still out there reading whatever interests you. And thanks again for all you’ve given to me over the years; I sincerely appreciate it and wouldn’t be where I am without you.


Your friend,


