The Popperian Falsification Fallacy

Popper’s approach to truth implies that one cannot state truth directly; instead, theories have to be falsified. The popular assumption is that doing so frees one from making false assumptions about truth. But there is a hidden fallacy in this approach that is not very obvious, and it can have devastating consequences once it becomes unconscious and forgotten.

Definition of Truth

Many “early” C programmers were forced to make assumptions about truth. The language did not have a built-in Boolean type. As a convention, people had to define the Boolean constants themselves.
The usual approach goes like this:

#define False 0
#define True 1

or

#define False 0
#define True !False

Dichotomous Interpretation

This approach is a numerical mirror of Conan Doyle’s idiom that he put in Sherlock Holmes’ mouth: once one has eliminated the impossible, the remainder, however improbable, must be the truth. It carries the implicit assumption that “true” arises from “not false” as its antonym. The problem with this assumption is an epistemological one: it assumes categorical, dichotomous definitions of true and false.
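
A minimal sketch of how this assumption can backfire in practice, assuming the True/False macros from above and a typical <ctype.h> implementation that signals truth with an arbitrary nonzero value rather than with the literal 1:

#include <ctype.h>
#include <stdio.h>

#define False 0
#define True !False /* expands to 1 */

int main(void)
{
    char c = '7';

    /* isdigit() is only guaranteed to return "nonzero" for a digit,
       not the literal 1, so comparing against True can fail even
       though the returned value is clearly "not false". */
    if (isdigit((unsigned char)c) == True)
        printf("recognised as a digit when compared against True\n");
    else
        printf("NOT recognised, although isdigit() returned %d\n",
               isdigit((unsigned char)c));

    /* Testing for "not false" behaves as expected. */
    if (isdigit((unsigned char)c) != False)
        printf("recognised as a digit when compared against False\n");

    return 0;
}

On such an implementation the else branch fires: “not false” and “equal to true” are not the same thing once the dichotomy is taken literally.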

So, following Popperian falsification, different definitions of truth arise depending on the epistemological assumptions of the interpreter. A dichotomous thinker who is firm in first-order (predicate) logic may follow the above definition. If dichotomous values for true and false are defined, Boolean logic can be used, and along with it implication and converse implication. However, implication and converse implication require that all phenomena entering the implication are known. This is untrue for any axioms that were derived by paradigmatic generalisation.
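
As a reminder of the standard propositional-logic relationship at play here (p and q are generic propositions, not tied to any particular study): an implication is equivalent to its contrapositive, but not to its converse,

$$(p \rightarrow q) \;\equiv\; (\lnot q \rightarrow \lnot p), \qquad (p \rightarrow q) \;\not\equiv\; (q \rightarrow p).$$

Concluding q → p from p → q is only safe when the domain of phenomena entering the implication is completely known, which is exactly what paradigmatic generalisation cannot guarantee.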

Linear Interpretation

Most modern science builds on the linear paradigm and uses factor analyses, linear regressions, and analyses of variance. It does so not to prove a hypothesis, but, in reverse implication, to reject the opposite hypothesis (the null hypothesis). There is, however, a problem if the observed model only explains a moderate amount of the variance: the converse of the implication cannot be drawn under this assumption. Truth turns into a probability. This probability does not only depend on the generalisability of one’s sample; even in a generalisable sample one may have observations that fall within the unexplained variance.
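
A compact way to state the problem in standard probability notation (H₀, D, and α are generic placeholders for the null hypothesis, the observed data, and the significance level, not values from any particular study): rejecting the null hypothesis bounds the probability of the data given the null, not the probability of the null given the data,

$$P(D \mid H_0) < \alpha \;\;\not\Rightarrow\;\; P(H_0 \mid D) < \alpha, \qquad\text{since}\qquad P(H_0 \mid D) = \frac{P(D \mid H_0)\,P(H_0)}{P(D)}.$$

Reversing the conditional would require knowledge of the priors P(H₀) and P(D), which the significance test alone does not provide.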

As a consequence, one may not apply predicate logic to the conclusions and to the acceptance of the hypothesis, as it is still probabilistic, not binary. Yet this is frequently seen when people make arguments based on prior studies. Thomas S. Kuhn therefore described truth as a relative consensus of paradigms that shifts along with continued falsification and never really ends. Only falsifications that one is aware of and that are available in the current context can enter into that consensus; prior knowledge may also be lost or become blurred, in the same way that the knowledge that probabilistic implications cannot be reversed into binary ones has become blurred.

Lexical Interpretation

Language itself depends on epistemological assumptions. The term truth, for example, may be defined as categorical, probabilistic, ontological, or experiential by different interpreters. Lexical studies, for example studies regarding the OCEAN traits based on the International Personality Item Pool (IPIP), use simple phrases. But even these simple phrases may be interpreted differently by speakers under varying epistemological assumptions. One speaker may interpret “I accept what others say” as “I’m open to new experience”. Others may interpret it as “I believe others’ suggestions too easily”. Or they may conclude “If others convince me, then I’m willing to accept”, i.e. that they are reasonable. If one measures such an item and reduces it to a score on a Likert scale, these differences in interpretation, which stem from epistemological and situational assumptions, are lost.
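
A small illustrative sketch of this reduction, with hypothetical names (the interpretations and the data are invented for the example, not taken from the IPIP): three respondents who read the same item under different epistemological assumptions all end up as the same integer in the data set.

#include <stdio.h>

/* Hypothetical interpretations a respondent may attach to the item
   "I accept what others say". */
enum Interpretation { OPEN_TO_EXPERIENCE, GULLIBLE, REASONABLE };

struct Response {
    int likert;                   /* 1..5, the only thing the study records */
    enum Interpretation meaning;  /* what the respondent actually meant     */
};

int main(void)
{
    struct Response responses[] = {
        { 4, OPEN_TO_EXPERIENCE },
        { 4, GULLIBLE },
        { 4, REASONABLE },
    };

    /* Only the Likert score enters the analysis; the interpretation,
       and with it the causative chain behind the answer, is discarded. */
    for (int i = 0; i < 3; i++)
        printf("respondent %d scored %d\n", i + 1, responses[i].likert);

    return 0;
}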

This implies that the causative chain which led to the behaviour is overgeneralised and lost in the process. This generalisation weakens the possibility of converse implication, and thus of drawing conclusions from the rejection of the null hypothesis.

In other words: This kind of falsification does not lead to causative truth, but boils down to the following:

There is a group of people in the sample whose utterances appear to cluster under the hypothesis. However, smaller clusters were ignored, and even within this cluster, homonymity may artificially inflate the cluster through the epistemological bias of the sample, the instrument, and the interpreter.

Hierarchical epistemology appears to be the least common denominator present within people, however sophisticated their current way of thinking may have become. It forms very early in thinking and is not always transcended, even by people in positions of power. In a feedback loop, people with this style of thinking appear to dominate the pursuit of positions of power. As a consequence, this type of thinking appears artificially over-emphasised by lexical analyses of personalities and societies in which authority plays a significant normative role.

Hiding Assumptions About Truth in the Method

The hidden assumption that Popper makes is now in the method. Defining what is false carries an implicit assumption of what is true, defining truth in the process, even if this assumption of truth is now obfuscated within the method. If false and true are afterwards interpreted as binary rather than probabilistic, one draws wrong conclusions: once the realm of probabilities is entered, it is impossible to leave it. The only science that can get away with this approach to induction is mathematics: it can perform a full implication because, by definition, it knows the completeness of the domain the proof is built on, described as a set. Such completeness never exists in nature. Yet this reduction back to categorical true and false may arise from ritualistic science, where people for the most part fear statistics and follow a method applied by many others, without taking the necessary care when conventionally drawing conclusions.
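
The contrast can be put in one line (S is a fully specified finite set, S′ a sample drawn from nature, and φ a generic predicate; all three are placeholder symbols, not taken from any particular proof or study): a mathematical proof may establish a universal statement by exhausting a domain that is complete by definition, while an empirical sample never licenses the same step,

$$S=\{s_1,\dots,s_n\}:\;\; \varphi(s_1)\wedge\cdots\wedge \varphi(s_n)\;\Rightarrow\;\forall x\in S\;\varphi(x), \qquad\text{whereas}\qquad \big(\forall x\in S'\;\varphi(x)\big)\;\not\Rightarrow\;\forall x\;\varphi(x).$$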

Even less does such completeness exist in lexical analyses. Words always stand for explanatory principles. Any word can be challenged by increasing the resolution with which one investigates the phenomenon the word has been derived from. As the French philosopher Derrida stated: there is no absolute meaning in any word; it is always perceived in its difference from the meaning already present in the reader, and is thus subject to epistemological relativity. He called this process différance. Of course, he faced harsh criticism. People don’t like to hear that. Binary assumptions are much easier to make. That doesn’t make them more true.
