Collins & Pinch's The Golem at Large: What you should know about technology

This was my "questioning" for Collins & Pinch's The Golem at Large: What you should know about technology. This post is part of a series from my CMNS 857 Philosophy of Technology seminar with Andrew Feenberg. For more information, see this linked post.


For the first time, I know my question and don't need to "type it out" to discover where my thoughts lie; I can, however, provide some context.

Truly, seriously -- what good are theoretical models for the human sciences? (I am purposefully painting with a broad brush, although for the current context the social construction of technology and science is assumed.) Why have and use theoretical frameworks to study the social? I am not convinced that using theory to predict outcomes in practice is worthwhile. Theories always seem to offer too many trap doors, too many "Yes, but..." clauses, too many "all conditions being equal." Why strive for theoretical models? To what end, and for what purpose?

The texts felt familiar as I approached them this week, especially under the heading of social constructionism. But I was surprised (and still am, actually) at the resonance I felt with both the Bijker & Pinch piece and the Golem text. I have a series of hunches that I have developed over the years, untested assumptions informed by a great deal of interdisciplinary reading, that seemed to be codified by the bricolage of ideas shared in the name of social construction.

For example, there is my "Guy in the back of the room" hunch. Whenever there are natural disasters, or something along the lines of the Challenger or Chernobyl catastrophes, I have to believe (for my own sanity) that there were people who were "in the room" (metaphorically) who spoke up, raised their hand, or otherwise made it well known that they knew something wasn't right. In the Golem text this appeared (in one iteration) as "risk assessment" - the reification of a probability that something will or will not be the case. Take the Challenger. There literally was a room, and there were folks who stood up and said, "Look, something's not right here. Yes, my evidence isn't as hard as our models would prefer, but something is wrong."

This leads me into my "x factor" hunch. I've slowly but surely settled on this name for those times when science/technology/positivism just doesn't cut it. There exists a gap between theory and practice, between the observation and the evidence, between the prediction and the outcome. And it cannot be explained by the model, the numbers, or the framework. It usually surfaces as a hunch - "Something's not right here." Without meaning to offend the victims of hurricanes and tsunamis -- if you live in an area next to an ocean, hurricanes and tsunamis happen. Don't be surprised when they do, and don't be angry with the government or the scientists or mother nature. God isn't out to get you. You live next to an ocean, and oceans have hurricanes and big waves. Yet there is always the refrain that one day - one day - we will get so good at predicting weather and natural disasters and whatnot. Will we?

An iteration of the x factor hunch (the hunches are all, most likely, the same thing, spawned from the same set of core ideas I haven't yet explored philosophically) came back to me in the chapter on economics. Economic models are always on my radar because they seem to be encroaching on the management and assessment of schools and of educating as a context. Many years ago, in an effort to hunt down a source of the x factor, I spent time with the analytic philosophers of language, Austin and Searle. I listened to lectures, read books, etc. Many of Searle's recent models related to social ontology were intriguing - but what stuck with me primarily was a critique Searle offered of economists' models. In the midst of the "global economic crisis" of the late 2000s, Searle approached his economist colleagues at Berkeley: "Why did this economic crisis happen?" Surely folks who get paid to develop models to explain (and predict?) the economies of countries and the entire world would have some answers to such a question. They had nothing. Not a clue.

Which brings me to a final point, I think. We think we can identify, predict, and enact control on a massive scale -- but we never quite seem to realize that we are really, really bad at controlling outcomes based on theoretical models. I don't think this is necessarily anti-science or techno-dystopian. But when it comes to things that matter - saving lives, establishing relationships, developing respectful citizens - we are not all that good at taking responsibility or being all that useful... considering the rhetoric surrounding the value of predictive social science theoretical frameworks.

Even as I compose this final paragraph I realize it does seem rather bleak -- but I do think we can make a difference. A big difference. And I don’t think we need theoretical models to help us do it.
