Category Archives: Classical Logic

Classical Logic IV – First-Order Logic

(This is going to be a long one, but bear with me. Once we make it through this, I get to start talking about the cool stuff.)

4.1. Quantification

If you’ve been doing the questions for Parts II and III, you may have run into some interesting quirks when symbolizing one of the arguments from Part I. Specifically:

  • P1) All men are mortal.
  • P2) Socrates is a man.
  • C) Socrates is mortal.

Now, with a little bit of reflection, it’s clear that this argument ought to be valid. However, if we try to symbolize it using what you’ve already learned, we end up with something like:

  • P1) R
  • P2) S
  • C) M

Which, it’s pretty clear, is not going to be a valid argument. The reason it doesn’t end up working is that the basic logical language we’ve learned (known as Sentential or Propositional Logic) has no way of formalizing the concept of “all”.

That’s why there’s a second logical language that we’ll be talking about called First-Order Logic (or Predicate Logic). It is a strict extension of Propositional Logic, which is to say that everything you’ve already learned will still apply. (Well… you won’t be doing truth tables any more, but you won’t really miss them, I promise). In our extension we introduce two new concepts to our language: “some” and “all”. Because these describe quantities, they are called “quantifiers”.

4.1.1. Predicates

Before we learn about quantifiers, however, we need to discuss something called predicates. While they do have a fancy formal definition, I’d like to leave that for a future discussion and just cover them in an intuitive sense.

A predicate is like an adjective, in that it describes something about a subject. For example, in the sentence “a baseball is round”, the predicate “is round” describes a particular subject (in this case “a baseball”). Because we have separated the two things out, we can then go on to apply the predicate to other round things: “the Earth is round”, “circles are round” and “soccer balls are round” will, instead of each being their own sentence, be represented as a single predicate applied to three different round subjects.

Symbolically, we tend to denote predicates by uppercase letters, and subjects as lowercase letters. So in the examples above, we could let R represent “is round” and then our four sentences become Rb, Re, Rc and Rs (where b,e,c,s represent “baseball”, “Earth”, “circles” and “soccer balls” respectively).

Typically, however, we reserve letters from the beginning of the alphabet (a,b,c, etc…) when we talk about particular subjects (called constants), and letters from the end of the alphabet (x,y,z, etc…) when we talk about generic subjects (called variables).

4.1.2. Universal Quantification

So now that we have predicates, we can apply our first quantifier to them: the universal quantifier, \forall. This symbol indicates that every subject has a particular property. In order to use it in a sentence, we must bind it to a variable such as x. Once we’ve done this, we can create our sentence: \forall xRx or “everything is round”.

Now, this isn’t particularly helpful, since any useful predicate will be true of some things and not of others. That’s why we’re able to quantify over not just predicates, but other sentences. So, a more useful sentence might be \forall x(Cx\rightarrow Rx) which could be read as “anything which is a circle is also round”. In fact, we can even place a quantifier in front of a quantified sentence to end up with something like \forall x\forall y((Cx\vee Cy)\rightarrow (Rx\vee Ry)) (which is a redundant sentence, but one that makes my point). Typically, we will abbreviate \forall x\forall y\ldots as \forall x,y,\ldots.

In fact, this is the general form that most universal quantifications will take: for all x, if x has some sort of property, then x has some other sort of property. To give another example, to say “all men are mortal” we would translate that as “all things that are men are mortal” or \forall x(Nx\rightarrow Tx).

4.1.3. Existential Quantification

The other type of quantifier we can apply to a sentence is called an existential quantifier and is used to denote that some (or at least one) subject has a particular property. The symbol to denote this is \exists and is used in a sentence exactly the same way as a universal quantifier: \exists xRx (read as “something is round”).

Again, this might not seem entirely useful, because it tells us nothing about what sorts of things the predicate applies to (in this case, what sorts of things are round?). But we can do another trick with them to narrow what we’re looking at by using conjunction. Take the predicate Px to mean “x is a plate”; then we have \exists x(Px\wedge Rx) or “some plates are round”. (Notice that we use conjunction here rather than implication: \exists x(Px\rightarrow Rx) would be true as soon as anything at all failed to be a plate, which is not what we mean.)

4.1.4. Relations

Relations are similar to predicates, except that they involve more than one subject. For example, if we wish to say “James is the father of Harry”, we could take F to represent the relation “is the father of” between James (j) and Harry (h). Symbolically we would write this as Fjh.

We aren’t just limited to two-place relations either. The predicate Mphw could represent the sentence “the priest married the husband and wife”. Examples with bigger and bigger relations do become more and more contrived, but technically speaking, there’s nothing to stop us.

We can also quantify over just part of a sentence. For example \exists xFxh would say “someone is Harry’s father” and \exists xFjx would say “James is the father of someone”.

Additionally, two-place relations are sometimes represented with something called “infix notation”. Here, instead of writing Rxy we would write xRy for whatever relation R we’re dealing with. I prefer not to use this notation, as it doesn’t work for relations with three or more subjects. With one notable exception:

4.1.4.1. Identity

We have the option of adding a special relation to our language, called identity (=). It is a two-place relation written with infix notation that indicates that two things are the same. If we know that these two things are the same, then we can substitute one for the other in sentences. So the following sentence is valid (ie: always true) for any predicate P:

\forall x,y(x=y\rightarrow (Px\leftrightarrow Py))

Or, stated plainly, if two things are the same then any property that holds for one will also hold for the other.

I prefer to deal with first-order logic that makes use of identity, and so that is what I shall do. You may ask why we would choose not to do so, and the answer to that is basically just that it can make proving things about first-order logic a bit trickier. But that’s outside of the scope we’re dealing with here today.

Identity also has the following properties:

  • Reflexivity: \forall x(x=x)
  • Symmetry: \forall x,y(x=y\rightarrow y=x)
  • Transitivity: \forall x,y,z((x=y\wedge y=z)\rightarrow x=z)

Reflexivity tells us that everything is equal to itself. Symmetry tells us that it doesn’t matter which way we order our terms. Transitivity tells us that if one thing is equal to another, and the second thing is equal to some third, then the first and third things are also equal. These are the three properties that give us something called an equivalence relation, but we’ll get to what that is when we talk about set theory.

4.2. Socrates

With all of that under our belts, let’s look back at the original question. The argument:

  • P1) All men are mortal.
  • P2) Socrates is a man.
  • C) Socrates is mortal.

Can now easily be symbolized as:

  • P1) \forall x(Nx\rightarrow Tx)
  • P2) Ns
  • C) Ts

Easy enough, right? However, this still doesn’t give us a way to tell whether the argument is valid. We can’t use a truth table, because there’s simply no way to express quantification in a truth table. (Well, technically we could, but only if we had a finite domain, and even then it’s a pain in the ass…)
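
If you’re curious about that finite-domain caveat, here’s a minimal sketch in Python (the two-element domain and the dictionary-based predicates N and T are purely illustrative choices). It enumerates every way of interpreting “is a man” and “is mortal” over the domain and confirms that no interpretation makes both premises true and the conclusion false:

```python
from itertools import product

# Toy finite domain; any finite list of individuals would do.
domain = ["socrates", "plato"]

def valid_in_domain():
    # Try every assignment of the predicates N ("is a man") and
    # T ("is mortal") over the domain.
    for n_vals in product([True, False], repeat=len(domain)):
        for t_vals in product([True, False], repeat=len(domain)):
            N = dict(zip(domain, n_vals))
            T = dict(zip(domain, t_vals))
            p1 = all((not N[x]) or T[x] for x in domain)  # forall x (Nx -> Tx)
            p2 = N["socrates"]                            # Ns
            c = T["socrates"]                             # Ts
            if p1 and p2 and not c:
                return False  # premises true, conclusion false: a counterexample
    return True

print(valid_in_domain())  # True: no counterexample exists in this domain
```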

So we are still in need of a way to tell whether arguments in first-order logic are valid. Which brings us to…

4.3. Fitch-Style Calculus

There are a number of different ways to determine validity in first-order logic. I’m going to be talking about one called Fitch-Style Calculus, but there are many more including Natural Deduction (which I won’t be talking about) and Sequent Calculus and Semantic Tableaux (which I probably will).

I personally like Fitch because I find it very intuitive. A basic Fitch proof involves taking your premises and deriving a conclusion. We no longer deal with truth values; instead, we consider all of our assumptions to be true within a particular context. Just as an argument has two parts, premises and conclusions, so do Fitch proofs. These two parts are delimited by a giant version of the \vdash symbol: premises (or assumptions) go on top of the symbol and conclusions go underneath. Additionally, each new line in your derivation must be justified by a rule, which we’ll get to.

The most important concept in such derivations is that of the sub-proof. Basically a sub-proof is a brief argument-within-an-argument with its own assumptions which can be used within the sub-proof, but not in the outside proof as a whole. We will also make use of the term “scope” which basically refers to how many sub-proofs deep we are in a current argument. The rules (mentioned above) include ways to move sentences between different scopes.

I know this all sounds confusing, but we’re going to look at these rules and see whether they make sense. Before we do, I want to show you an example of an (admittedly complicated) proof, so that you can see what it looks like. You can check it out here. Take particular notice of the sub-proofs (and the sub-proofs within sub-proofs) to get an idea of how this sort of thing gets laid out. Also notice that each line is justified either as an assumption (appearing above the horizontal line of the \vdash symbol and abbreviated “Ass.” (try not to laugh)) or justified by a rule and the lines which the rule is making use of. Finally, notice that the last line is outside of any scope lines. We’ll see what this means later on.

But onto the rules! Once again, there are two different kinds: rules of inference and rewriting rules.

4.3.1. Fitch-Style Rules of Inference

Rules of inference are used to either introduce or eliminate a connective or quantifier from a sentence, or move sentences between scopes. They may require one or several earlier lines of the proof in order to be applied. In the examples below, the line with the rule’s name next to it represents an application of that rule. Additionally, for these rules we will be using Greek letters to represent sentences which may not necessarily be atomic. This is because these rules are general forms which can be applied to very complex sentences. Thus, the symbol \varphi may stand for A, Pa, Pa\wedge Pb or even something as complicated as \forall x,y,z((Rxy\wedge Rxz)\rightarrow Ryz).

Each logical symbol in our language will have both an introduction rule (how to get that symbol to appear in a sentence) and an elimination rule (how to get it out). Thus, we have two rules each for \wedge,\rightarrow,\vee,\neg,\leftrightarrow,\bot,\exists,\forall,= plus one more, for a total of 19 rules.

4.3.1.1. Reiteration

The first rule of inference is kind of a silly one. Quite frankly, it almost always gets left out of written proofs (in fact, it’s been left out of my example proof above). Basically it states that any line of a proof can be repeated (reiterated) at any line within the same scope or deeper. In a proof, it looks something like:
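
Since I can’t draw proper Fitch diagrams in plain text, I’ll sketch the rules instead, with \vdots standing for any intervening lines of the proof. Roughly:

  \varphi
  \vdots
  \varphi    Reit.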

Where basically you have some sentence \varphi and later on in the proof you repeat it. Notice that we are justifying the second appearance of it with the name of the rule: “Reit.” This is important to do on every line of your proof. Normally (as seen in my example proof) we would also cite the lines the rule is being derived from; because we have no line numbers here, they are omitted. There is a second (more useful, but still generally omitted) version of this rule which allows you to move a sentence into a sub-proof:
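
Sketched roughly, with the vertical bar marking the sub-proof’s scope line and the dashes marking the end of its assumptions:

  \varphi
  | \psi    Ass.
  | ----
  | \varphi    Reit.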

Notice however that although we can move sentences deeper into sub-proofs, we can’t take a sentence in a sub-proof and move it back outside of the sub-proof. In order to get things out of sub-proofs, we need different rules, which we’ll see shortly.

4.3.1.2. Conjunction Introduction and Elimination

These two rules are fairly straightforward. First, and-introduction:
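
Roughly, in the same plain-text sketch style:

  \varphi
  \psi
  \varphi\wedge\psi    \wedgeI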

This rule tells us that if we have two sentences, then we can simply “and” them together. Think of it this way: since we have \varphi and we have \psi then we clearly have \varphi\wedge\psi too.

And-elimination is simply the same thing, backwards:
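
Roughly (with a matching version that concludes \psi instead):

  \varphi\wedge\psi
  \varphi    \wedgeE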

If we have a conjunction, then we can derive either of the two terms. Put another way, if we have \varphi\wedge\psi then we clearly must have \varphi and also \psi.

4.3.1.3. Implication Introduction and Elimination

Implication-introduction is the first time we will see the use of a sub-proof in one of our rules. It has the form:
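
Roughly:

  | \varphi    Ass.
  | ----
  | \vdots
  | \psi
  \varphi\rightarrow\psi    \rightarrowI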

This deserves a bit of explanation, but is pretty easy to understand once you see what’s going on. In our sub-proof, we are assuming \varphi. Now, it doesn’t actually matter whether \varphi is true or not, but for the sake of argument we are assuming that it is. Once we’ve done so, if we can somehow further derive \psi, then we are able to end the sub-proof (also called discharging our assumption) and conclude \varphi\rightarrow\psi, which is just the formal way of saying “if \varphi then \psi”, or “if we assume \varphi then we can conclude \psi”.

Implication elimination is identical to the Modus Ponens rule we learned in Part III. In fact, in many proofs (including my example proof above) we abbreviate the rule “MP” instead of “\rightarrowE”. To remind you, this rule has the following form:
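
Roughly:

  \varphi\rightarrow\psi
  \varphi
  \psi    \rightarrowE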

As you can see, this is completely identical to what you learned about Modus Ponens earlier.

4.3.1.4. Disjunction Introduction and Elimination

Or-introduction is also fairly straightforward and takes the form:
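
Roughly:

  \varphi
  \varphi\vee\rho    \veeI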

Which is to say, if we have \varphi then we can loosen this to \varphi\vee\rho for any arbitrary sentence \rho. Since disjunctions are also commutative (remember from Part III), we can also infer \rho\vee\varphi.

Or-elimination, on the other hand, is considerably more complex:
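
Roughly, with two separate sub-proofs, one after the other:

  \varphi\vee\psi

  | \varphi    Ass.
  | ----
  | \vdots
  | \rho

  | \psi    Ass.
  | ----
  | \vdots
  | \rho

  \rho    \veeE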


All this says, though, is that if you have a disjunction and can derive the same sentence (\rho) from either disjunct, then regardless of which of \varphi or \psi is true, \rho must also hold. It is also important to notice that we are using two different sub-proofs. The sub-proof where we assume \varphi is completely separate from the one where we assume \psi. So, for example, we couldn’t use the reiteration rule to bring \varphi into the second sub-proof, because this would involve reiterating it through a less deep scope level. It can be a bit of a tricky one to wrap your head around, but once you’ve played around with it a bit, it begins to make sense.

4.3.1.5. Negation Introduction and Elimination

Negation introduction and elimination have exactly the same form. The only difference between the two is the location of the \neg symbol:
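
Roughly, negation introduction first, then elimination:

  | \varphi    Ass.
  | ----
  | \vdots
  | \bot
  \neg\varphi    \negI

  | \neg\varphi    Ass.
  | ----
  | \vdots
  | \bot
  \varphi    \negE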

In fact, we technically don’t even need both of these rules. Instead we could forgo the elimination rule and simply use the double negation rule (see below). We would do this by assuming \neg\varphi, deriving \bot, concluding \neg\neg\varphi using \negI and then rewriting it to \varphi using the double negation rule. If you understand what I just said, you’re doing great so far!

4.3.1.6. Biconditional Introduction and Elimination

These two rules basically just formalize the notion that a biconditional is literally a conditional that goes both ways. Its introduction rule has the form:
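
Roughly:

  \varphi\rightarrow\psi
  \psi\rightarrow\varphi
  \varphi\leftrightarrow\psi    \leftrightarrowI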

Where, if two sentences imply each other, then they mutually imply each other. The elimination rule has the form:
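
Roughly (with a matching version that concludes \psi\rightarrow\varphi instead):

  \varphi\leftrightarrow\psi
  \varphi\rightarrow\psi    \leftrightarrowE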

Where, if two things mutually imply each other, then they each imply the other on their own.

4.3.1.7. Bottom Introduction and Elimination

Recall that the \bot symbol is a logical constant which represents a sentence which is always false. Basically, its only use is to enable the negation introduction and elimination rules. You might wonder how we could possibly derive a sentence which is always false when we’re assuming that our sentences are always true. Well that’s kind of the point. Observe:
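
Roughly:

  \varphi
  \neg\varphi
  \bot    \botI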

The only way to actually derive \bot is to derive a contradiction. This will usually only happen in the context of some sub-proof, where it enables the negation introduction and elimination rules, which say basically “if your assumption leads to a contradiction, then your assumption is incorrect”. This is also a form of argument known as “reductio ad absurdum”, where you grant an opponent’s premise in order to show that it is logically inconsistent with some other point.

The bottom elimination rule does exist, but is seldom used. It has the form:
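
Roughly:

  \bot
  \rho    \botE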

All this does is return us to the notion that anything is derivable from a contradiction. Regardless of what \rho actually is, if we already have a contradiction, we can infer it. It’s not like we can make it more inconsistent!

4.3.1.8. Substitutions

Once we start involving quantifiers in our sentences, our rules start getting a bit muckier, because we have to deal not only with sentences containing predicates, but also terms. Because of this, we need to have something called a substitution, which I’ll explain briefly.

As the name suggests, a substitution takes one term out of a sentence and replaces it with a completely different term. This is written as \varphi[a/b] where \varphi is some arbitrary sentence, b is the term we are replacing, and a is what we are replacing it with. Here are some examples:

  • Pa[b/a] evaluates to Pb
  • (Pa\wedge Qa)[b/a] evaluates to Pb\wedge Qb
  • Pa[b/a]\wedge Qa evaluates to Pb\wedge Qa
  • (Pa\wedge Qb)[b/a] evaluates to Pb\wedge Qb
  • \forall x (Px\rightarrow Rxa)[b/a] evaluates to \forall x(Px\rightarrow Rxb)

We may sometimes wish to talk about what terms appear in a sentence. I will use the notation a\in\varphi to mean that the term a appears in the sentence \varphi and the notation a\not\in\varphi to mean that it does not. With that in mind, let’s move on.

4.3.1.9. Existential Introduction and Elimination

Existential introduction is a way to weaken a particular statement. It takes the form:
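
Roughly, where a is a constant appearing in \varphi and x is a variable with x\not\in\varphi:

  \varphi
  \exists x\varphi[x/a]    \existsI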

At any given time, we can take a concrete statement and make it abstract. To give a particular example, if we had the claim “the Earth is round” we could weaken it to say “something is round”. Which, while less helpful, is still true. We must, however, take care that the variable we are attaching to our quantifier does not already appear in our sentence. For example, if one had the sentence \exists x(Px\wedge Qa), we could derive \exists y\exists x(Px\wedge Qy) but not \exists x\exists x(Px\wedge Qx), because x already appears in our formula.

Existential elimination is a bit more straightforward, but also has a bit more bookkeeping involved with it:
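
Roughly, where the constant a must be fresh:

  \exists x\varphi
  \varphi[a/x]    \existsE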

If we know that there is some x for which \varphi is true, then we can give it a concrete name. The only thing we have to be careful about is what name we give it. In general, we can’t give it a name that is already assigned to an item. So if we had \exists x\varphi and \exists x\psi then we could derive \varphi[a/x]. Once we’ve done so, however, we can no longer derive \psi[a/x] but we can derive \psi[b/x] (assuming b has not already been used somewhere else within the same scope). A constant term that has not yet been used is called “fresh”.

4.3.1.10. Universal Introduction and Elimination

Universal introduction is weird, to the point where I don’t like to use it because it’s confusing. In the example proof I linked to above, I actually avoid using it and instead use a combination of negation elimination and a version of DeMorgan’s laws (see below). But for the sake of completeness, here it is:
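
Roughly, subject to the restrictions described below:

  \varphi
  \forall x\varphi[x/a]    \forallI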

Once again, we have to make sure that the variable we are binding to our quantifier doesn’t appear in the rest of our sentence. But there’s another catch to this rule: in order to use it, you must be outside of any scope lines on your proof. In my example proof above, I could have used this rule to make another line at the end of the proof, but at no other point in my proof would this rule have been viable.

Universal elimination is much more straightforward:
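
Roughly, for any term a we like:

  \forall x\varphi
  \varphi[a/x]    \forallE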


We don’t have to worry about fresh variables or scope lines or anything else. Because \varphi is true for all x we can just substitute whatever the hell we want in for x. Easy.

4.3.1.11. Identity Introduction and Elimination

Our last (yay!) rule of inference involves the identity relation. Remember that this is a relation between two terms that indicates that they refer to the same entity. The introduction rule is as follows:
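
Roughly, a single line requiring no premises at all:

  a=a    =I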

… wait, that’s it? Yep. Without having anything in advance, you can always deduce that any arbitrary term is identical to itself. No baggage required. The elimination rule is a bit more complicated, but really not by much:
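
Roughly:

  a=b
  \varphi
  \varphi[b/a]    =E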

If two things are identical, then they can be substituted for each other in any arbitrary sentence (\varphi).

4.3.2. Fitch-Style Rewriting Rules

The following rules allow you to rewrite sentences (or parts of sentences) in Fitch-style derivations. Actually, these rules will pretty much apply to any system of first-order logic. Once again, if you replace the \Leftrightarrow with a biconditional \leftrightarrow then the resulting sentence is a tautology and will always be valid. Next time, I’ll even teach you how to prove this.

4.3.2.1. Commutativity

As we saw in Part III, conjunction, disjunction and biconditionals are all commutative. Thus we have the following rewriting rules:

\varphi\wedge\psi\Leftrightarrow\psi\wedge\varphi
\varphi\vee\psi\Leftrightarrow\psi\vee\varphi
\varphi\leftrightarrow\psi\Leftrightarrow\psi\leftrightarrow\varphi

Additionally, the variables in a repeated existential or universal quantification are also commutative. This gives us:

\exists x\exists y\varphi\Leftrightarrow\exists y\exists x\varphi
\forall x\forall y\varphi\Leftrightarrow\forall y\forall x\varphi

It is important to note, however, that the following are not rewriting rules:

\forall x\exists y\varphi\not\Leftrightarrow\exists y\forall x\varphi
\exists x\forall y\varphi\not\Leftrightarrow\forall y\exists x\varphi

There’s an enormous difference between saying “everyone loves someone” and “someone loves everyone”, which is why the order of the variables can only be rearranged when they are bound by the same kind of quantifier.

4.3.2.2. Associativity

Again, as we saw in Part III, conjunction, disjunction and biconditionals are associative, meaning that it doesn’t matter how you group them:

\varphi\wedge(\psi\wedge\rho)\Leftrightarrow(\varphi\wedge\psi)\wedge\rho
\varphi\vee(\psi\vee\rho)\Leftrightarrow(\varphi\vee\psi)\vee\rho
\varphi\leftrightarrow(\psi\leftrightarrow\rho)\Leftrightarrow(\varphi\leftrightarrow\psi)\leftrightarrow\rho

There’s no real difference here between propositional and predicate logics.

4.3.2.3. DeMorgan’s Laws

Also as seen in Part III, we have DeMorgan’s laws which identify the relationship between conjunction, disjunction and negation and allow us to define them in terms of each other.

\neg(\varphi\wedge\psi)\Leftrightarrow(\neg\varphi\vee\neg\psi)
\neg(\varphi\vee\psi)\Leftrightarrow(\neg\varphi\wedge\neg\psi)

However, we also have a variation of DeMorgan’s Laws which operates on quantifiers:

\exists x\varphi\Leftrightarrow\neg\forall x\neg\varphi
\forall x\varphi\Leftrightarrow\neg\exists x\neg\varphi

Seriously, think about these until they make sense to you. It may help to think of an existential statement as a giant disjunction (ie: “something is round” means “either electrons are round or the Earth is round or squares are round or birds are round…”) and a universal statement as a giant conjunction (ie: “everything has mass” means “electrons have mass and the Earth has mass and squares have mass and birds have mass…”).

4.3.2.4. Double Negation

Double negation is another one that’s straight out of last time:

\neg\neg\varphi\Leftrightarrow\varphi

If \varphi isn’t not true, then it must be true. Simple.

4.3.2.5. Definition of \rightarrow

Once again, \rightarrow can be defined in terms of \vee and \neg as follows:

\varphi\rightarrow\psi\Leftrightarrow\neg\varphi\vee\psi

Make sure you understand why this is true as well.

4.3.2.6. Symmetry and Transitivity of Identity

Finally, we have a couple of rules regarding identity. The first is easy enough:

a=b\Leftrightarrow b=a

You might ask yourself why this doesn’t fall under commutativity and the answer is a little bit confusing. Commutativity is a rule which applies between sentences. It says that you can swap the order of the sentences on either side of certain connectives. Symmetry is slightly different. It says that you can swap the order of the terms on either side of certain relations. It’s a bit of a technical distinction, but it’s one that’s important to maintain.

Transitivity of identity isn’t really a re-writing rule, but I like to put it here with symmetry because they go well together. Basically it states that if you know that a=b and also that b=c then you can also conclude that a=c. It can be made into a rewriting rule by:

\exists x(a=x\wedge x=c)\Leftrightarrow a=c

4.4. Future and Questions

It turns out that there’s actually a lot that needs to be said about first-order logic. As such, I’m going to split it into two (well, two-and-a-half) different entries. Next time I’ll talk about proving validity using Fitch-style systems as well as some strategies for proving things in first-order logic and translations of more complicated sentences. I also want to go over my example derivation and explain step-by-step what’s going on (in fact, I’ll probably do that part first).

In the meantime, here is a handy reference for all of the rules we went over today. See if you can answer the following questions:

  1. Translate the following sentences into first-order logic:
    • All circles are round.
    • Some buildings are tall.
    • Phil said hello to everyone at the market.
    • Joe knows a guy who can get you cheap concert tickets.
    • She is her mother’s daughter.
    • She got married to someone who owned a farm.
    • John is taller than Suzie, but shorter than Pete. Frank is taller than all of them.
    • There is at least one apple in that basket.
    • There are exactly two apples in that basket.
    • There are no more than three apples in that basket.
  2. Let Tx be the sentence “x owns a tractor”. Translate the following sentences:
    • \exists xTx
    • \forall xTx
    • \neg\exists xTx
    • \neg\forall xTx
    • \exists x\neg Tx
    • \forall x\neg Tx
    • \neg\exists x\neg Tx
    • \neg\forall x\neg Tx
  3. Determine whether the following relations are reflexive, symmetric and/or transitive:
    • “is the father of”
    • “is related to”
    • “is an ancestor of”
    • “is the brother of”
    • “is the sibling of”
    • “is descended from”
  4. Without using any rewriting rules, use Fitch-style derivations to prove the tautological form of each of the rewriting rules (ie: replace “\Leftrightarrow” with “\leftrightarrow”).
  5. Use Fitch-style derivations to prove the following arguments:
    • Pa\vee Pb,\neg Pb\vee Pc\vdash Pa\vee Pc
    • Pa\vee(Pb\wedge Pc),\neg Pb\vee\neg Pc\vee Pd\vdash Pa\vee Pd
    • a=b,a=c\vdash b=c
    • \neg\forall xPx\vdash\neg\forall x(Px\wedge Qx)
    • \exists xPx,\forall y(Py\rightarrow Qy)\vdash\exists zQz
    • \forall x((Px\wedge Qx)\rightarrow Rx),\forall x((Qx\vee Rx)\rightarrow Sx),\exists x(Px\wedge Qx)\vdash\exists xSx

Classical Logic III – Rules of Inference

3.1. Validity and Truth Tables

So it’s all well and good to be able to symbolize the sentences in an argument, but how does that help us when we’re trying to determine validity? Well let’s look at two related concepts that we’ve just learned and see if that can help us.

Remember the truth table for \rightarrow? It’s okay if you don’t, I’ll give it to you here:

A B A\rightarrow B
T T T
T F F
F T T
F F T

So basically, an implication is false only when the antecedent is true and the consequent is false.

Does this sound familiar? What was our definition of validity again? If it is impossible for the premises to be true and the conclusion false, then the argument is valid. Can you see the connection? True implications get true consequents from true antecedents, and valid arguments get true conclusions from true premises. So how can we use truth tables to prove an argument is valid? Consider an example:

The argument:

  • If it rains, then the grass will be wet.
  • It is raining.
  • Therefore, the grass is wet.

Can be symbolized as:

  • P1) R\rightarrow W
  • P2) R
  • C) W

But consider the sentence ((R\rightarrow W)\wedge R)\rightarrow W, where what we’ve done is take the conjunction of the argument’s premises and then state that they imply the argument’s conclusion. Let’s examine the truth table for this sentence:

R W R\rightarrow W (R\rightarrow W)\wedge R ((R\rightarrow W)\wedge R)\rightarrow W
T T T T T
T F F F T
F T T F T
F F T F T

Notice that, regardless of the truth values of R and W, our final sentence turns out to always be true. But consider what that sentence means: that the premises of our argument imply our conclusion. Since the sentence is always true, this means that the premises necessarily imply the conclusion. And this is simply a definition of validity.

So, in general, if we want to determine whether an argument is valid, we can use this method of creating a sentence by making a conjunction of our premises, and using that as the antecedent in an implication, with the conclusion as the consequent. Then simply make a truth table for that sentence, and if it holds in all possible truth assignments, we know we have a valid argument. Neat!
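
For the programmatically inclined, here’s a minimal sketch of this recipe in Python (the helper names implies and is_valid are my own, purely for illustration). It loops over all 2^n truth assignments and looks for a row with true premises and a false conclusion:

```python
from itertools import product

def implies(a, b):
    # The truth table for the arrow: false only when a is true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, num_atoms):
    # premises and conclusion are functions taking a tuple of truth values,
    # one per atomic sentence, and returning True or False.
    for vals in product([True, False], repeat=num_atoms):
        if all(p(vals) for p in premises) and not conclusion(vals):
            return False  # premises true, conclusion false: invalid
    return True

# Our valid argument: R -> W, R, therefore W (vals = (R, W)).
print(is_valid([lambda v: implies(v[0], v[1]), lambda v: v[0]],
               lambda v: v[1], 2))  # True

# The invalid one below: R -> W, W, therefore R.
print(is_valid([lambda v: implies(v[0], v[1]), lambda v: v[1]],
               lambda v: v[0], 2))  # False
```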

Let’s quickly look at an invalid argument, just so that you can see the difference. This sort of example is called a non-example and is often useful for visualizing what you’ve just learned. Consider the argument:

  • P3) If it rains, the grass will be wet.
  • P4) The grass is wet.
  • C2) Therefore, it is raining.

As we discussed in Part I, this argument is invalid. So let’s symbolize the whole argument as ((R\rightarrow W)\wedge W)\rightarrow R and examine the resulting truth table.

R W R\rightarrow W (R\rightarrow W)\wedge W ((R\rightarrow W)\wedge W)\rightarrow R
T T T T T
T F F F T
F T T T F
F F T F T

Uh-oh… It looks like when R is false and W is true, the whole argument falls apart. Consider what this means: it is not raining, but the grass is still wet. This was exactly the reasoning we gave in Part I for why this was invalid. Neat how that works out. So given that, a valid argument must always have its resulting implication evaluate to true.

3.2. Rules of Inference

So we now know that some arguments are valid, some are invalid, and we can determine which are which by using truth tables. But the problem with deciding which arguments are valid this way is that you need to know what their conclusions are ahead of time. This is not how critical thinking works. Usually we just want to start out with a few premises and see where they get us, while preserving validity along the way. This process is called inference or deduction, and can use a number of rules in order to pull it off. Let’s take a look at some of them:

3.2.1. Modus Ponens

Modus Ponens (or MP) is one of the most common tools in deduction, and says the following: if you have premises of the form A\rightarrow B and A then you are able to conclude B. Visually, we write this as:

A\rightarrow B,A\vdash B

Where the \vdash symbol is read as “entails” and means that if all the sentences on the left are true, then so is the sentence on the right. Or, put another way, if we know the sentences on the left, then we are able to deduce the sentence on the right.

This isn’t just some crazy thing I made up either: you can check whether it’s true or not… by using a truth table!

A B A\rightarrow B (A\rightarrow B)\wedge A ((A\rightarrow B)\wedge A)\rightarrow B
T T T T T
T F F F T
F T T F T
F F T F T

The right-hand column is always true, so MP must be a valid form of argument! In fact, if you look closely, this is the exact same table from our example of a valid argument above: you’ve already been using MP and you didn’t even know it!

It’s good practice to try making these tables, so I’m going to leave them out for the rest of this section. But don’t let that fool you: you should absolutely try to make them yourself. It’s important to be able to do this kind of stuff if you have any interest in logic.

3.2.2. Modus Tollens

Modus Tollens (MT) is kind of the reverse of MP in that it takes an implication and the truth value of the consequent in order to infer the truth value of the antecedent. Its logical form is:

A\rightarrow B,\neg B\vdash\neg A

Put another way, the only way for a true implication to have a false consequent is for its antecedent to be false as well. Thus, we can infer that this is the case. Try doing the truth table for it. Seriously.

3.2.3. Disjunctive Syllogism

Disjunctive Syllogism (DS) says that, given a disjunction, if one of the disjuncts is false, then the other one must be true. This technically has two forms:

A\vee B,\neg B\vdash A
A\vee B,\neg A\vdash B

This one can be tricky to visualize, but if you do the truth table it becomes fairly obvious as to why this is valid. Doooo iiiitttt!

3.2.4. “And” Elimination

“And” Elimination (\wedgeE) is a rule that states that if a conjunction is true, then so are its conjuncts. It also technically has two forms:

A\wedge B\vdash A
A\wedge B\vdash B

There’s not much to say about this one: if two things are true together, then obviously both of them are true on their own.

3.2.5. “And” Introduction

Similarly, if two things are true on their own, then both of them will be true together. This is what “And” Introduction (\wedgeI) states.

A,B\vdash A\wedge B

3.2.6. “Or” Elimination

Okay, this one’s a bit weird. Bear with me. First I’m going to give you its logical form:

A\vee B,A\rightarrow C,B\rightarrow C\vdash C

“Wait, where did this C business come from?” I hear you asking. Well, if all we have to work from in an “Or” Elimination (\veeE) is a single disjunction, then there’s not really any way of figuring out which of the two disjuncts is true. BUT! If we know that C (whatever it happens to be) is implied by both A and B, then regardless of which one is actually true, C will hold either way.

It’s confusing, I know. That’s why you’re supposed to be doing the truth tables for these. Honestly, they make sense once you do.

3.2.7. “Or” Introduction

Not nearly as complicated is “Or” Introduction (\veeI). It has the forms:

A\vdash A\vee C
A\vdash C\vee A

Again, C springs out of nowhere, but it kind of makes sense this time. If we know that A is true, then we can attach whatever we want to it without making a difference, since all the resulting disjunction claims is that either the attached sentence is true or A is (which it is).

3.2.8. “Iff” Elimination

If you’ll recall from Part II, “iff” is shorthand for “if and only if” and is used to describe the connective \leftrightarrow. Thus, “Iff” Elimination (\leftrightarrowE) is:

A\leftrightarrow B\vdash A\rightarrow B
A\leftrightarrow B\vdash B\rightarrow A

Which basically formalizes the idea that a biconditional indicates that both sides imply each other.

3.2.9. “Iff” Introduction

“Iff” Introduction (\leftrightarrowI) is just the reverse of \leftrightarrowE and states:

A\rightarrow B,B\rightarrow A\vdash A\leftrightarrow B

Plainly, if two sentences imply each other, then they are equivalent.

3.2.10. Tautology

A tautology is a sentence which is always true. The canonical example is A\vee\neg A, since no matter what, exactly one of A and \neg A will always be true (making the whole thing true). Others include A\rightarrow A and many more, some of which we’ll see below. But the neat thing about tautologies is that they can be inferred from nothing:

\vdash A\vee\neg A

Remember that this means that if everything on the left is true (which it always will be, trivially, since there is nothing on the left) then the sentence on the right will be true (which it always will be, again).

This actually gives us a new notion of validity, for sentences instead of arguments. A sentence is valid if and only if it is true under all truth value assignments. This also means that an argument is valid if and only if the sentence we built from it above (the conjunction of its premises implying its conclusion) is valid.

3.2.11. Reductio ad Absurdum

Recall in Part I where we talked about how anything can be derived from a contradiction? Well, we actually have a rule of inference for that. In fact, we have two, which together fall under the name Reductio ad Absurdum. The first one we will call \botI (or “Bottom Introduction”):

A\wedge\neg A\vdash\bot

The symbol \bot is called “bottom” and is a logical constant used to represent a contradiction. It is also used to represent a sentence which is always false, regardless of truth values (kind of like the opposite of a tautology), but these concepts are actually identical. You can even make a truth table to prove it.

The second rule is Bottom Elimination (\botE) and has the form:

\bot\vdash C

Which is how we formalize our notion that anything can be derived from a contradiction.

This whole process is what’s referred to as Reductio ad Absurdum, a form of argument where you assume something is true in order to show that it leads to a contradiction, and thus can’t actually be true. But we’ll deal with this further in a more philosophically-minded post.

3.2.12. Summary

Because it’s handy to have that all in one place:

A\rightarrow B,A\vdash B (MP)
A\rightarrow B,\neg B\vdash\neg A (MT)
A\vee B,\neg A\vdash B (DS)
A\vee B,\neg B\vdash A (DS)
A\wedge B\vdash A (\wedgeE)
A\wedge B\vdash B (\wedgeE)
A,B\vdash A\wedge B (\wedgeI)
A\vee B,A\rightarrow C,B\rightarrow C\vdash C (\veeE)
A\vdash A\vee C (\veeI)
A\vdash C\vee A (\veeI)
A\leftrightarrow B\vdash A\rightarrow B (\leftrightarrowE)
A\leftrightarrow B\vdash B\rightarrow A (\leftrightarrowE)
A\rightarrow B,B\rightarrow A\vdash A\leftrightarrow B (\leftrightarrowI)
\vdash A\vee\neg A (Taut.)
A\wedge\neg A\vdash\bot (\botI)
\bot\vdash C (\botE)

3.3. Rewriting Rules

In addition to the rules of inference, there are many other rules which can be used to rewrite sentences (or even parts of sentences) in order to make derivations easier.

I like to call these rewriting rules. The interesting thing about them is that they can, themselves, all be rewritten to take the form of a biconditional tautology. I’ll show you how at the end. For now, here are some useful rewriting rules:

3.3.1. Commutativity

Commutativity is a general math term used to indicate that it doesn’t matter what order your terms are in. For example, 4+5=5+4=9 tells us that addition is commutative, since it doesn’t matter whether the 5 or the 4 comes first.

Similarly, classical logic has two main commutative operators: \wedge and \vee. Our rewriting rules can be written as:

(A\wedge B)\Leftrightarrow(B\wedge A)
(A\vee B)\Leftrightarrow(B\vee A)

I am using the symbol \Leftrightarrow to indicate that the left side can be rewritten as the right side, and vice versa, without affecting the truth of the sentence.

Now, this might seem obvious, but the reason it’s important to point out is that, while \wedge and \vee are commutative, not everything else is. In particular \rightarrow is not. In order to see this, remember when we discussed the difference between “if” and “only if” in Part II.

3.3.2. Associativity

Associativity is another math term, this one indicating that it doesn’t matter how you group a repeated operation. Again, consider addition:

(1+2)+4=(3)+4=7=1+(6)=1+(2+4)

It doesn’t matter whether we add 1 and 2 first, or whether we add 2 and 4. That’s why you would be more likely to see this kind of expression simply written as 1+2+4.

Similarly, \wedge and \vee are both associative:

((A\wedge B)\wedge C)\Leftrightarrow(A\wedge (B\wedge C))
((A\vee B)\vee C)\Leftrightarrow(A\vee (B\vee C))

Which is why you will always see a string of \wedge‘s and \vee‘s written without brackets as:

A_1\wedge A_2\wedge\ldots\wedge A_n
A_1\vee A_2\vee\ldots\vee A_n

Again, for an example of an operator that is not associative, try making truth tables for the sentences A\rightarrow(B\rightarrow C) and (A\rightarrow B)\rightarrow C.

3.3.3. DeMorgan’s Laws

DeMorgan’s Laws are probably the rewriting rules that you will use most often. They firmly establish the relationship between \wedge,\vee and \neg. They are:

\neg(A\wedge B)\Leftrightarrow(\neg A\vee\neg B)
\neg(A\vee B)\Leftrightarrow(\neg A\wedge\neg B)

This can be easily remembered as “the negation of a conjunction is a disjunction of negations” and vice versa. But seeing why it’s this way is a lot harder. If you haven’t been doing the truth tables up until now, I very strongly encourage you to do this one, as this concept will come up time and time again, and not just in classical logic. It comes up in set theory, quantificational logic, modal logic: you name it! So do the truth tables now and get it out of the way. 🙂

3.3.4. Double Negation

Remember in Part I when I said that anything that isn’t true is false and anything that isn’t false is true? It may have seemed like a stupid thing to say at the time, but here’s why it’s important to point out: because it gives us this rule.

\neg\neg A\Leftrightarrow A

This should be clear enough: if a negation swaps the truth value of a sentence, then its negation should swap the truth value back to what it originally was. Easy, right? See, not all of these have to be painful.

3.3.5. Implicational Disjunction

I’ll level with you, I don’t actually know what this is called, and I couldn’t even find anything online naming it. But it’s important. Super important. This rule is:

(A\rightarrow B)\Leftrightarrow(\neg A\vee B)

In fact, this is so important that many logical systems don’t even bother defining implication, as you can get it simply with negation and disjunction. We’ll see why you might want to do this in the future, but for now just know that these two sentences are equivalent, and it can save you a lot of headaches. This is another one that I would strongly encourage you to do the truth table for.

3.3.6. Rewrites Are Tautologies

I mentioned above that rewriting rules can be rewritten as tautologies (sentences which are always true). Hopefully you’ve already figured this out, but in case you haven’t, in order to do this all you have to do is change \Leftrightarrow into \leftrightarrow and BAM! You have a new sentence which will always be true.

No kidding, try it out!

3.3.7. Internal Rewrites

The biggest difference between rewriting rules and rules of inference is that rewriting rules can be used on the inside of sentences. For example:

(A\wedge\neg\neg B)\rightarrow C

Can be rewritten as:

(A\wedge B)\rightarrow C

Whereas, in order to apply a rule of inference, the entire sentence needs to have the relevant form.

3.3.8. Summary

And once again, because it’s handy to have them all in one place:

(A\wedge B)\Leftrightarrow(B\wedge A) (Comm.)
(A\vee B)\Leftrightarrow(B\vee A) (Comm.)
((A\wedge B)\wedge C)\Leftrightarrow(A\wedge (B\wedge C)) (Assoc.)
((A\vee B)\vee C)\Leftrightarrow(A\vee (B\vee C)) (Assoc.)
\neg(A\wedge B)\Leftrightarrow(\neg A\vee\neg B) (DeM)
\neg(A\vee B)\Leftrightarrow(\neg A\wedge\neg B) (DeM)
\neg\neg A\Leftrightarrow A (DN)
(A\rightarrow B)\Leftrightarrow(\neg A\vee B) (def\rightarrow)

3.4. Questions

You should now be able to answer the following questions.

  1. Use truth tables to show whether the arguments from Part I are valid in classical logic.
  2. Create truth tables showing that all of the rules of inference from section 3.2. are valid.
  3. Give three examples of tautologies.
  4. Give three examples of contradictions.
  5. Create truth tables showing that all of the rewriting rules from section 3.3. are valid.
  6. Without creating a truth table, use DeMorgan’s Laws and Double Negation to show that:
    • (A\wedge B) is equivalent to \neg(\neg A\vee\neg B)
    • \neg(\neg A\wedge\neg B) is equivalent to (A\vee B)
  7. Use truth tables to show that \leftrightarrow is commutative and associative.

Classical Logic II – Symbolic Logic

2.1. Symbolic Logic

Now, logic with words seems straightforward enough, but we don’t necessarily want to always talk about specific arguments. Sometimes we wish to examine the generalized form of an argument in order to determine validity (though not soundness: for that we need a specific argument). In order to accomplish this goal, we use symbolic logic.

The general idea for this is to use variables to represent clauses in our sentences, as well as connectives in order to relate our sentences together. This might seem strange, but think of it kind of like algebra: you have a concrete concept (numbers) which you are generalizing using an abstract concept (variables) and then manipulating them together using various operations (plus, minus, square root, etc…). In our case, we are using variables to represent sentences and manipulating them with what are called connectives.

For now, we will represent these clauses using sentence letters. These will be uppercase roman letters (A,B,C,\ldots,P,Q,R,\ldots,X,Y,Z). So for example, we may represent the sentence “It is raining.” with the variable R. There’s nothing special about the letter R; we could easily use I or S. Under no circumstance will we be dealing with arguments that use more than 26 clauses; however, if a hypothetical logician were ever to need that many variables, he or she can simply add subscripts to the letters (A_1,A_2,\ldots).

So what do we use to relate different sentences together? We use connectives.

2.2. Logical Connectives

2.2.1. Truth Tables

In order to explain connectives, we’ll first need to go over the concept of a truth table. Basically, a truth table takes a complex sentence and shows how each of the logical connectives modify the truth of the base sentences.

In order to build one, we lay out all of the different truth assignments to the base (or “atomic”) sentences:

A B A and B
T T
T F
F T
F F

We see here that the truth table for the sentence “A and B” will be based on the truth values of A and B and will be laid out underneath the rightmost column. On the left are the truth values for A and B, in each of the four possible combinations of values. The value “T” stands for “true” and the value “F” stands for “false”. In general, for a truth table with n atomic sentences, there will be 2^n possible combinations of truth values, which means 2^n rows.
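
For the programmatically inclined, here’s a minimal sketch in Python (purely illustrative) of how those 2^n rows get enumerated, using the “A and B” table as an example:

```python
from itertools import product

atoms = ["A", "B"]
print("  ".join(atoms) + "  A and B")
# One row per combination of truth values: 2**len(atoms) rows in total.
for values in product([True, False], repeat=len(atoms)):
    A, B = values
    row = ["T" if v else "F" for v in values]
    row.append("T" if (A and B) else "F")
    print("  ".join(row))
```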

So let’s check out our very first connective:

2.2.2. Conjunction

As we saw above, one important connective when building complex sentences is “and”. It is usually written as \& or \wedge. We will be using the \wedge notation. More formally, “and” is known as conjunction.

The sentence A\wedge B is read “A and B” and is true if and only if A and B (also called the conjuncts) are both true. Represented as a truth table:

A B A\wedge B
T T T
T F F
F T F
F F F

Thus, perhaps obviously, A\wedge B is true exactly when A is true and B is true, and false at all other times.

2.2.3. Disjunction

Well that was easy enough. What about disjunction, also known as an “or” statement? These are statements which indicate multiple options. For example, the sentence “A or B” is written A\vee B. In this example, A and B are known as the “disjuncts”.

Let’s look at the truth table for a disjunction:

A B A\vee B
T T T
T F T
F T T
F F F

Most of this is fairly straightforward. We would expect “true or false” and “false or true” to both evaluate to true, as well as “false or false” to evaluate to false. But what about “true or true”? Should this really evaluate to true? After all, when you go to a restaurant and a meal comes with “soup or salad” you can’t very well take both. Classical logic, however, was designed to deal with more mathematical concepts. Consider the sentence “either two is even or three is odd”. Clearly such a sentence is true, despite having two true disjuncts. If it still doesn’t make sense, just try to roll with it for now, and later we’ll see a version of “or” called “exclusive or” which behaves more like you might expect.

2.2.4. Negation

Unlike conjunction and disjunction, which connect two sentences, negation is a one-place connective (which is an oddly-named concept). The negation of a sentence A is written \sim A or \bar A or \neg A. We will be using the notation \neg A.

A \neg A
T F
F T

Thus, essentially, a negation in classical logic simply flips a sentence from true to false, or vice-versa.

2.2.5. Implication

Implication is really the most interesting of the connectives, and even tends to do most of the work in proofs. It is a translation of the sentence “if A then B” and is written A\rightarrow B. In this example, A is called the antecedent and B is called the consequent.

A B A\rightarrow B
T T T
T F F
F T T
F F T

This one can take a bit of getting used to. The first two lines behave more-or-less as expected. If both the antecedent and the consequent are true, then the antecedent truly does imply the consequent, making the implication true. If the antecedent is true, but the consequent is false, then the first cannot truly be said to imply the second, making the entire thing false. The last two lines are a bit tricky and perhaps counter-intuitive, but they basically say that if the antecedent is false, then there’s no way to really tell whether it implies the consequent. So regardless of whether the consequent is true or not, the entire implication counts as true whenever the antecedent is false. It’s a bit confusing at first, but when you give it some thought, it starts to make sense.

2.2.6. Biconditional

Our last connective is called the biconditional and is used to indicate that two sentences always have the same truth value. It is written A\leftrightarrow B and read “A if and only if B”.

A B A\leftrightarrow B
T T T
T F F
F T F
F F T

Basically this tells us that whenever A is true, then B is true and whenever A is false B is false, and vice-versa.

2.2.7. Complex Sentences

One note before moving on: we aren’t limited to atomic sentences when applying connectives. We can use progressively more complex sentences and string them together to create arbitrary sentences. For example:

\neg(A\wedge B)\leftrightarrow(\neg A\vee\neg B)

is a perfectly well-formed sentence, even though we are connecting together progressively more complicated things.

2.3. Symbolizing Arguments

Let’s try symbolizing our very first argument. If you’ll recall:

  • If it rains, then the grass will be wet.
  • It is raining.
  • Therefore, the grass is wet.

By symbolizing “It is raining.” as R and “The grass is wet.” as W, we arrive at the following:

  • P1) R\rightarrow W
  • P2) R
  • C) W

This type of argument has a formal name, modus ponens, although that’s not terribly important right now. Basically it says that if we know R then we can infer W (P1). However, we do know R (by P2), therefore we can conclude W. One important thing to note is that we don’t care about the tenses of our sentences: R stands equally well for “it is raining” and “it rains”, while W stands for both “the grass will be wet” and “the grass is wet”.

Now, let’s consider some words in natural English and their logical translations.

2.3.1. “But”

In plain English, the word “but” means something different than the word “and”. For example, the sentence “I want to go to a movie, but I’m too tired” has a slightly different meaning than “I want to go to a movie and I’m too tired.” Specifically, “but” tends to be used to imply a slightly different situation than what would be intuited from the first clause. In our example, I want to go to a movie, but when I say I’m too tired, it implies that I might not go. Logically speaking, however, “but” and “and” are identical.

Why is this? Consider the sentence “John went to the mall, but Mary did not.” What is required in order for this sentence to be true? Well, namely it requires first that John went to the mall, and second that Mary didn’t. When all we are considering is the truth values of the atomic sentences (as we do in logic) “but” ends up having the same truth-functional requirements as “and”. As such, the two are logically equivalent and in particular we might translate this sentence as J\wedge\neg M.

2.3.2. “Then”

“Then” can be a tricky concept. In plain English, sometimes it can mean an implication, but sometimes it can also be a conjunction. The key in determining this is typically the word “if”. When you see something like “If it rains, then I will bring an umbrella.” we can hopefully see fairly easily that this will take the logical form of R\rightarrow U. However, a sentence such as “I went to the grocery store, then I bought some milk.” is a different creature. In this case, we are not saying that milk will be bought if I go to the grocery store, but rather that I did go there, and then I bought the milk. Thus, such a sentence would be translated as G\wedge M. This can be tricky, but a careful analysis of the meaning of the natural English sentence should be enough to determine which translation applies.

2.3.3. “Nor”

This one can be hard to translate. When you say something like “I will neither give you a bus ticket, nor lend you my car.” the translation is not immediately obvious. What connective is “nor”? But the simplest way to examine this is to look at the word itself: n-or. As you might guess, “nor” is translated as a negated “or” statement. Thus our statement above becomes \neg(B\vee C). And this makes sense. If B\vee C were true, then I must be taking at least one of the two options. But if I am doing neither, then what we want is simply this sentence’s negation. Later we will see that this sentence is actually equivalent to another version: \neg B\wedge\neg C.

2.3.4. “Only if”

This is easily the hardest translation we’ll be talking about today. If you’ll recall from above, the translation of the sentence “if A, then B” is A\rightarrow B. Similarly, the translation for “B, if A” is also A\rightarrow B. So you might be inclined to think that the antecedent will always be whichever term follows the word “if”. However this would be incorrect.

The sentence “B, only if A” is actually translated as B\rightarrow A. But why is this? Well, the simplest thing to do is to make a truth table for the two versions of the sentence:

A B A\rightarrow B B\rightarrow A
T T T T
T F F T
F T T F
F F T T

In the last column, we can see that in all rows where B\rightarrow A is true (ie: ignoring the third row), B is only true when A is true. On the other hand, if we were to translate “B, only if A” as A\rightarrow B then we would run into problems, as in the third line (where A\rightarrow B is actually true), B actually holds without A holding.

So while it may seem that this translation violates the temporal ideas of cause coming before effect, it is important to note that, because we are dealing purely with logic, all we care about are the truth values of the component sentences, and that of the final result. With this in mind, one can hopefully see why the correct translation is B\rightarrow A.

2.3.5. “One or the other, but not both”

As mentioned above, in a standard disjunction, if both of the terms are true, then the resulting sentence is true. However, this may not always accurately represent what we mean in spoken English. Take, for example, the sentence “John was either born on a Tuesday or a Friday.” Now, clearly we don’t mean that both of these could possibly be true, but if we symbolize the sentence as a plain disjunction, this information will be lost. So what can we do?

Well, with a little bit of thought, it’s easy to see that such a sentence means the same thing as “John was either born on a Tuesday or a Friday, but not both.” With this translation, it becomes much easier to see what we’re looking for. Thus, the correct translation of this sentence would be (T\vee F)\wedge\neg(T\wedge F). This may look complicated, but if you stare at it closely enough, you begin to see how it works: in the first part, we are using our original statement that either John was born on a Tuesday or a Friday; but in the second part we are clarifying that it is not the case that both took place.

This also allows us to work out the problem mentioned above with getting soup or salad: (O\vee A)\wedge\neg (O\wedge A). You can get soup, or you can get salad; but you’re not able to get both soup and salad.

2.3. Questions

You should now be able to answer the following questions.

  1. Translate all the arguments presented in section 1 into their logical forms.
  2. Translate the following sentences into their logical forms:
    1. If John and Michael both ask Louise to the dance, then she will either have a date or will not go.
    2. The statement “two is even” is equivalent to the statement “two plus one is odd”.
    3. If I will neither drink Coke nor Pepsi then I shan’t drink Pepsi and I shan’t drink Coke.
    4. Sharon will go to the cinema, but only if Bill pays for her ticket and buys her some popcorn; but Angela will go either way.
  3. Create truth tables for the following sentences:
    1. (A\rightarrow B)\rightarrow B
    2. \neg(A\wedge B)
    3. A\wedge\neg A
    4. A\vee\neg A
    5. ((A\rightarrow(B\vee C))\rightarrow(A\rightarrow(D\leftrightarrow B)))\vee D
  4. Use truth tables to show that the following sentences are equivalent and explain why a biconditional can be read as “if and only if” or “is equivalent to”:
    • (A\rightarrow B)\wedge(B\rightarrow A)
    • (A\wedge B)\vee(\neg A\wedge\neg B)
    • A\leftrightarrow B
  5. Use a truth table to show that the sentence (A\vee B)\wedge\neg(A\wedge B) can also be translated as \neg(A\leftrightarrow B).

Classical Logic I

1.1. Introduction

Alright, let’s kick this off with some easy stuff.

Before we begin, though, I want to talk briefly about what logic actually is. Logic is a system of inference that allows you to derive conclusions from premises. That’s pretty much it. Most of the time when you apply logic, you deal with conclusions and premises that look something like:

  • Socrates is a man.
  • All men are mortal.
  • Therefore Socrates is mortal.

Most of the time when you’re talking about logic, however, you use variables (just like math! yay!) in order to generalize the forms of the arguments you’re interested in. This is what you’ll mostly see later, although for now I’m going to stick with full sentences in order to keep things clear(-ish).

Some people try to claim that logic is simply one thing: a collection of logical absolutes that are demanded by the universe, written into the fabric of existence or other such nonsense. Truth is, logic has many different models, and each of these models can be used to reflect something different in reality. It may seem like a universal requirement that you can’t have a sentence and its negation both be true, but believe it or not there are logics which allow that. Perhaps even more surprisingly, they have real world applications!

But for now we’re going to stick with what’s called classical logic. It’s what people tend to think of when they think about logic (certainly, it tends to be what gets taught in introductory logic courses). Basically it was designed as a way to reason about mathematics. Thus, it has certain properties that you’d expect: it is non-contradictory, non-constructivist, immutable, sound and complete (among other properties). Don’t worry if you don’t know what these words mean. You can either look them up on the Internet or wait for me to get around to explaining them some day.

1.2. Arguments

1.2.1. Premises and Conclusions

I said before that logic is a system of inference that allows you to derive conclusions from premises. Let’s work out exactly what that means. A system of inference is a collection of rules used to form conclusions. The system of inference we will be talking about is called Classical Logic and is usually written without capital letters. There’s a reason it’s called “classical”, which is also the same reason it’s taught at the introductory level: it reflects a lot of our intuitions about logic. Many of the rules in classical logic will seem “obvious” and you may not be able to imagine how they could be otherwise. That’s both good and bad. Good, because it makes classical logic easier to grasp; but bad because it can make it more difficult to learn more complicated systems later on. Don’t worry too much about this, but try to keep it in the back of your mind.

So what are premises? Basically these are the building blocks of an argument. Premises are a set of “assumptions”, usually taken for granted for the purposes of the argument. Sometimes they can be painfully obvious: “The sun is hot.” or “One plus one equals two.”, for example. Other times, they can be more difficult to verify: “Taking a life is morally reprehensible in all situations.” The important thing to remember is that they’re called “assumptions” for a reason: the point of the argument is to show that, if you grant the premises, the conclusion follows.

Finally, a conclusion is what you’re building towards using your premises. It is much more likely to be controversial than your premises; however, establishing the truth of a controversial conclusion is much easier than establishing that of a controversial premise: deriving it from agreed-upon premises is exactly the point of the argument.

So let’s consider the canonical example:

  • P1) Socrates is a man.
  • P2) All men are mortal.
  • C) Therefore, Socrates is mortal.

In this argument, we see the premises (P1 and P2) and the conclusion (C). Now, if we assume that P1 and P2 are both true (as we should, given their status as assumptions) then it’s hard to argue with C. But are P1 and P2 true? It’s difficult to say. Is Socrates truly a man if he is dead? Perhaps he never even truly existed? Are all men indeed mortal? Maybe science can someday achieve immortality. Should these doubts affect our argument? In the grand scheme of things, perhaps. However, all we are considering is whether the premises establish the conclusion. In our case, I hope it is clear that they do.

1.2.2. Valid Arguments

Now, clearly, not all arguments are going to be good ones. Thus, in order to determine how much attention to give to an argument, we need a way of assessing it. This assessment is going to be called validity. A valid argument is one whose conclusion must necessarily follow from its premises. In other words, if it is impossible to imagine a world where the premises of an argument are true and its conclusion false, then the argument is valid. Similarly, if you can imagine such a world, the argument is said to be invalid.

Let’s see some examples:

  • P1) If it rains, the grass will be wet.
  • P2) It is raining.
  • C1) Therefore, the grass is wet.

and:

  • P3) If it rains, the grass will be wet.
  • P4) The grass is wet.
  • C2) Therefore, it is raining.

Now, one of these is a valid argument and one is not. Can you figure out which is which? Seriously, think about it for a minute. I’ll wait.

Did you say that the first one is valid and the second one invalid? If so, you’ve just taken your first step into a larger world! But why is that? In the first example, we can see that it is impossible for P1 and P2 to both be true and have C1 be false. We can ignore things like the possibility of a tarp or an awning because P1 guarantees us that if it is raining, then the grass will be wet. Since P2 tells us it is raining, the only possibility is that the grass is wet. But why isn’t the second argument valid? Sure, the grass will be wet if it rains (P3), but just because the grass is wet (P4) doesn’t mean it is raining (C2). Suppose it is sunny out and we have the sprinkler running on the lawn? Then P3 and P4 will be true, but C2 will not be. That makes this an invalid argument.
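In fact, the “imagine a world” test can be automated for arguments like these: a “world” is just an assignment of truth values to the basic sentences. Here’s a rough Python sketch of the idea (the function name is_valid and the trick of encoding sentences as functions on a world are conventions I’ve made up for illustration, not anything standard):

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """Valid iff no world (truth assignment) makes every
    premise true while making the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # found a counterexample world
    return True

rain_implies_wet = lambda w: (not w["R"]) or w["W"]

# Argument 1: if it rains the grass is wet; it rains; so the grass is wet.
print(is_valid([rain_implies_wet, lambda w: w["R"]],
               lambda w: w["W"], ["R", "W"]))  # True

# Argument 2: if it rains the grass is wet; the grass is wet; so it rains.
# The sprinkler world (R false, W true) is the counterexample.
print(is_valid([rain_implies_wet, lambda w: w["W"]],
               lambda w: w["R"], ["R", "W"]))  # False
```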

Let’s try a harder one:

  • P5) Either the sky is made of marshmallows or the capital of Ohio is Columbus.
  • P6) Columbus is not the capital of Ohio.
  • C3) Therefore, the sky is made of marshmallows.

and:

  • P7) The sky is blue.
  • P8) The ocean is blue.
  • C4) Therefore, sapphires are also blue.

What do you think? Which one is valid and which one is invalid? If you said the first one is invalid and the second one is valid, then you are… WRONG! But don’t worry about it: that’s the point of trick questions. They’re how we learn.

So what makes the first of these two arguments valid? Clearly it’s crazy: the sky isn’t made of marshmallows! Columbus is the capital of Ohio! Who invented this crazy logic-stuff anyways? Well, let’s think about the definition of validity. At any point does it say anything about whether the conclusion must actually be true? It may seem like it, but read it carefully: “if it is impossible to imagine a world where the premises of an argument are true and its conclusion false, then the argument is valid.” So can we imagine a world where Columbus is not the capital of Ohio? Sure, that’s easy! Maybe in this world it’s Cleveland. Why not? Can we imagine a world where either the sky is made of marshmallows or the capital of Ohio is Columbus? That shouldn’t be too hard either (in fact, we live in such a world). But can we imagine a world where both those things are true? It might seem bizarre, but there’s nothing really stopping us if we put our mind to it. And in this world, where Cleveland is the capital of Ohio and yet either the capital of Ohio is Columbus or the sky is made of marshmallows, I guess the sky must be made of marshmallows. It sounds weird, but if you give it some thought, it really does make sense. Kind of. Trust me.

Well, what about the other argument? Clearly the sky, the ocean and sapphires are all blue. So what makes this argument invalid? Well, it’s basically the opposite of the last one. Imagine a world where the sky is still blue, the ocean is still blue, but sapphires are (for some reason) yellow. Easy, right? The argument must be invalid. In fact, an early clue towards this might be that the conclusion has nothing to do with the premises. That’s called a non sequitur, and it’s usually a good sign that you’re looking at an invalid argument. Although, as we’ll see, there’s a quirk in classical logic where this doesn’t always invalidate an argument. We’ll look at that in a bit.
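The same brute-force check settles both trick arguments, if you’d like to see it done. (Again just a sketch; I’m using M for “the sky is made of marshmallows”, O for “Columbus is the capital of Ohio”, and S, C, J for the sky, the ocean and sapphires being blue.)

```python
from itertools import product

# Argument 3: M or O; not O; therefore M.
valid3 = all(not ((m or o) and not o) or m
             for m, o in product([True, False], repeat=2))
print(valid3)  # True: no world makes both premises true while M is false

# Argument 4: S; C; therefore J.
valid4 = all(not (s and c) or j
             for s, c, j in product([True, False], repeat=3))
print(valid4)  # False: the yellow-sapphire world (S, C true; J false) is a counterexample
```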

So we’ve seen that valid arguments can have false conclusions and invalid arguments can have true conclusions. You might be asking yourself (or, more likely, me) what the hell use this notion of validity is in the first place!? Which brings us to…

1.2.3. Sound Arguments

Validity is a property examining the structure of an argument. Soundness is the property you actually want to use in order to judge an argument. A sound argument must fulfill two criteria:

  1. It must be valid. (Hah! You just learned what that was! Awesome! You’re so ahead!)
  2. Its premises must be true.

So if you ever want to critique an argument, the real thing you want to look at is soundness. Because if the premises are true, and the premises imply the conclusion, then the conclusion must be true! (Look, ma, I just made an argument about arguments!)

What about our four arguments above? Well, numbers 2 and 4 are clearly out, as they weren’t even valid. Thus, they must not be sound. Number 3 is also out, because Columbus is in fact the capital of Ohio, making P6 false (oddly enough, it’s this and not P5 that makes the argument unsound, but you’ll see why later on). So what about number 1? Well, it depends. For me, right now as I write this, it happens to be a sound argument. For you, where you are as you read this, it might not be. So maybe that’s a bit confusing. Let’s see if we can find a better example:

  • P1) The sun is very hot.
  • P2) Touching something very hot can burn you.
  • C) Touching the sun can burn you.

Hard to argue with: P1 and P2 are both true and necessarily imply C. Impractical though it may be, I think we’ve found our first sound argument! (You know, at least until the sun burns out, or we invent a ridiculous full body version of the grill glove…)

1.3. Non-Contradiction

One of the important notions of classical logic is that of non-contradiction. It’s a fairly intuitive concept, basically stating that something can’t be both true and not true (ie: false) at the same time. Or to put it another way, a sentence and its negation cannot both be true at the same time. So for example, the statements “I am hungry” and “I am not hungry” cannot be simultaneously true. Similarly, the sentence pairs:

  • “One plus one equals three”/”One plus one does not equal three.”
  • “I have better things to do than to read this.”/”I do not have better things to do than to read this.”
  • “Logic is stupid.”/”Logic is not stupid.”

are all contradictory. Thus, exactly one of each of them must be true. (I’ll leave you to decide which of each of the pairs is which.)

This has some odd results when it shows up in arguments, which can be split up into two parts: contradictions appearing among premises and contradictions appearing in conclusions.

1.3.1. Contradictions in Premises

There’s a strange little feature in classical logic whereby anything can be derived from a contradiction. As such, an argument like:

  • P1) The sky is blue.
  • P2) The sky is not blue.
  • C) You are being eaten alive by dinosaurs RIGHT AS WE SPEAK!

is actually (oddly) a valid argument.

“Oh, that’s bullshit, I’m out of here!” I hear you saying. Well hang on a minute, let’s take a moment to consider this for a second. Let’s think again about what it means for an argument to be valid: if the premises of the argument are true, then the conclusion must necessarily also be true. Okay, great. So let’s imagine a world where the sky is blue. And also not blue. Wait, okay… that doesn’t make sense. No such world could possibly exist, right? But the weird thing about non-existent things is that we can say whatever we want about them.

It’s just like God. He doesn’t exist, so we can easily say that he’s omnipotent, omnibenevolent, omniscient, a dinosaur, a transvestite, and enjoys the occasional hamburger. Oh, and he also exists (my best summary of the ontological argument). You can say whatever you want about him. At the end of the day, it doesn’t matter because once you’ve already established non-existence (even simply as a premise, as in the God case), anything else you say about the subject just doesn’t matter.

Okay, so back to our world of the blue-and-yet-not-blue sky. It doesn’t exist. It can’t! So we might as well say whatever we want about it, including how delicious you taste to the velociraptor standing… right… BEHIND YOU!
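And if you run the “imagine a world” test from earlier on this argument, you can see exactly where the weirdness comes from: the check passes vacuously, because no world ever satisfies both premises. (A small sketch, with B for “the sky is blue” and D for the dinosaur conclusion.)

```python
from itertools import product

# A counterexample world would have to make both premises true and D false.
counterexamples = [(b, d)
                   for b, d in product([True, False], repeat=2)
                   if b and (not b) and not d]
print(counterexamples)  # [] -- no counterexample can exist, so the argument is (vacuously) valid
```

The premises filter out every possible world, so there is nothing left to falsify the conclusion.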

Heck, you can even use it to derive other contradictions. Which brings us to…

1.3.2. Contradictions in Conclusions

Classical logic has a property known as soundness. While confusing, this is different from the property of soundness that an argument may or may not have. Basically some philosopher decided that it would be funny if two surface-unrelated concepts had the same name, and also made them both likely to come up in a gentle introduction to logic.

So what does this new version of soundness say? Basically it tells us that anything derived from true premises is also true.  But since, as we know, a contradiction cannot be true, having a contradiction in your conclusion means either that your premises are contradictory (since anything can then be derived from them) or that you made a mistake along the way. Either way, having contradictory conclusions will give you a lot of information, so keep an eye out for them.

1.4. Excluded Middle

One more point that will seem obvious when I say it, but warrants saying anyways, is about truth values and the law of the excluded middle.

What is a truth value? Basically, it’s the status of a sentence with regard to its truth. A truth value can only be one of two things: true or false. Any given sentence is one or the other: it cannot be both, nor can it be neither.

Any sentence which is not true is false; and any sentence which is not false is true. This may sound confusing, if only because it’s so obvious that you’re wondering why the heck I’m bothering to say it, and that’s a fair question. In classical logic, truth behaves almost exactly as you’d expect it to. And so to avoid confusing you, I will say no more except to warn you that there are variations of logic which avoid this principle. But they’re way off in the future. For now just think of truth as a binary thing, the negation of which is false.
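Since there are only two truth values, both of the laws we’ve seen so far can be checked exhaustively in a couple of lines (a trivial sketch, but it makes the binary picture concrete):

```python
# Brute-force check over both possible truth values of a sentence A.
for a in [True, False]:
    assert a or (not a)          # excluded middle: A or not-A always holds
    assert not (a and (not a))   # non-contradiction: A and not-A never holds
print("both laws hold in every case")
```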

1.5. Sentences

I just want to wrap up quickly. There’s a concept in introductory logic that always gets mentioned, and I’d be remiss if I didn’t do so myself. Basically that concept is sentences.

Now, this won’t matter so much later on when we’re dealing with sentences that look more like this:

(a\wedge b)\rightarrow(c\vee (d\wedge a))

but there’s a slight issue about what sentences actually are. Basically, every introductory logic textbook I’ve ever seen defines them as this: a sentence is a sequence of words which can either be true or false and not both.

This might be confusing, but we’ve seen enough premises and conclusions today to give you an idea of what sentences look like. So just for clarity, here are a few things that aren’t sentences, at least not as far as logic is concerned:

  • S1) “Hello!”
  • S2) “Please go over there.”
  • S3) “Purple squid monkey penis!”
  • S4) “This sentence is false.”

S1 and S2 are (perhaps obviously) not considered sentences because they cannot be either true or false. Certainly, they have meaning, but you can’t really apply them in an argument. S3 is a complete non sequitur and is so devoid of meaning as to be utterly useless when trying to discuss truth.

S4 raises some interesting questions, however. It seems like the kind of thing that should have a truth value. But what value ought it to have? Let’s say it’s true. Well if it’s true, that immediately means it must be false. So it’s false? But if that’s the case then its falsehood makes itself true. And you just keep going around in circles trying to figure it out. Thus, not only does it not have a truth value, but it cannot have any truth value. Which is kind of neat when you think about it. (And for those of you wondering, yes, this is where the joke in Portal 2 comes from. See, learning is fun and it gives you a better appreciation for the things you already love!)
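You can even watch the liar sentence fail to find a home. If S4 had a truth value v, then (since it asserts its own falsehood) v would have to satisfy v == (not v), and no boolean does (a toy sketch, but it makes the circle vivid):

```python
# S4 asserts its own falsehood, so a consistent truth value v must satisfy v == (not v).
solutions = [v for v in [True, False] if v == (not v)]
print(solutions)  # [] -- neither True nor False works, so S4 has no truth value
```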

1.6. Questions

You should now be able to answer the following questions:

  1. What are the two parts of an argument?
  2. Describe in your own words what a valid argument is.
  3. Give two examples of valid arguments: one with true premises and one with false premises.
  4. Give two examples of invalid arguments: one with a true conclusion and one with a false conclusion.
  5. Give an example of a sound argument with false premises.
  6. What kind of premises allow you to derive a contradiction?
  7. How many different truth values can sentences possibly have? What are they?
  8. Give an example of a meaningful sequence of words that is not a sentence.