Classical Logic III – Rules of Inference

3.1. Validity and Truth Tables

So it’s all well and good to be able to symbolize the sentences in an argument, but how does that help us when we’re trying to determine validity? Well, let’s look at two related concepts that we’ve just learned and see if they can help us.

Remember the truth table for \rightarrow? It’s okay if you don’t, I’ll give it to you here:

A B A\rightarrow B
T T T
T F F
F T T
F F T

So basically, an implication is false only when the antecedent is true and the consequent is false.

Does this sound familiar? What was our definition of validity again? If it is impossible for the premises to be true and the conclusion false, then the argument is valid. Can you see the connection? True implications get true consequents from true antecedents, and valid arguments get true conclusions from true premises. So how can we use truth tables to prove an argument is valid? Consider an example:

The argument:

  • If it rains, then the grass will be wet.
  • It is raining.
  • Therefore, the grass is wet.

Can be symbolized as:

  • P1) R\rightarrow W
  • P2) R
  • C) W

But consider the sentence: ((R\rightarrow W)\wedge R)\rightarrow W where what we’ve done is taken the conjunction of the argument’s premises and then stated that they imply the argument’s conclusion. Let’s examine the truth table for this sentence:

R W R\rightarrow W (R\rightarrow W)\wedge R ((R\rightarrow W)\wedge R)\rightarrow W
T T T T T
T F F F T
F T T F T
F F T F T

Notice that, regardless of the truth values of R and W, our final sentence turns out to always be true. But consider what that sentence means: that the premises of our argument imply our conclusion. Since the sentence is always true, this means that the premises necessarily imply the conclusion. And this is simply a definition of validity.

So, in general, if we want to determine whether an argument is valid, we can use this method of creating a sentence by making a conjunction of our premises, and using that as the antecedent in an implication, with the conclusion as the consequent. Then simply make a truth table for that sentence, and if it holds in all possible truth assignments, we know we have a valid argument. Neat!
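
If you’re comfortable with a bit of programming, this whole procedure is easy to automate. Here’s a minimal Python sketch (the `is_valid` and `implies` helpers are my own names, and premises are encoded as boolean functions) that builds the big implication and checks it under every assignment:

```python
from itertools import product

def implies(a, b):
    # Material implication: false only when a is True and b is False.
    return (not a) or b

def is_valid(premises, conclusion, num_vars):
    # Valid iff ((P1 and ... and Pn) -> C) is true under every
    # assignment of truth values to the sentence letters.
    for values in product([True, False], repeat=num_vars):
        premises_hold = all(p(*values) for p in premises)
        if not implies(premises_hold, conclusion(*values)):
            return False  # found a counterexample row
    return True

# The rain argument: R -> W, R, therefore W.
print(is_valid([lambda r, w: implies(r, w), lambda r, w: r],
               lambda r, w: w, 2))   # True: valid

# Affirming the consequent: R -> W, W, therefore R.
print(is_valid([lambda r, w: implies(r, w), lambda r, w: w],
               lambda r, w: r, 2))   # False: invalid
```

Note that the second call encodes exactly the invalid argument we look at next: the checker finds the counterexample row for us.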

Let’s quickly look at an invalid argument, just so that you can see the difference. This sort of example is called a non-example and is often useful for visualizing what you’ve just learned. Consider the argument:

  • P3) If it rains, the grass will be wet.
  • P4) The grass is wet.
  • C2) Therefore, it is raining.

As we discussed in Part I, this argument is invalid. So let’s symbolize the whole argument as ((R\rightarrow W)\wedge W)\rightarrow R and examine the resulting truth table.

R W R\rightarrow W (R\rightarrow W)\wedge W ((R\rightarrow W)\wedge W)\rightarrow R
T T T T T
T F F F T
F T T T F
F F T F T

Uh-oh… It looks like when R is false and W is true, the whole argument falls apart. Consider what this means: it is not raining, but the grass is still wet. This was exactly the reasoning we gave in Part I for why this was invalid. Neat how that works out. So given that, a valid argument must always have its resulting implication evaluate to true.

3.2. Rules of Inference

So we now know that some arguments are valid, some are invalid, and we can determine which are which by using truth tables. But the problem with deciding which arguments are valid is that you need to know what their conclusions are ahead of time. This is not how critical thinking usually works. Usually we just want to start out with a few premises and see where they get us, while preserving the validity of whatever we come up with. This process is called inference or deduction, and it uses a number of rules in order to pull it off. Let’s take a look at some of them:

3.2.1. Modus Ponens

Modus Ponens (or MP) is one of the most common tools in deduction, and says the following: if you have premises of the form A\rightarrow B and A, then you are able to conclude B. Visually, we write this as:

A\rightarrow B,A\vdash B

Where the \vdash symbol is read as “entails” and means that if all the sentences on the left are true, then so is the sentence on the right. Or, put another way, if we know the sentences on the left, then we are able to deduce the sentence on the right.

This isn’t just some crazy thing I made up either: you can check whether it’s true or not… by using a truth table!

A B A\rightarrow B (A\rightarrow B)\wedge A ((A\rightarrow B)\wedge A)\rightarrow B
T T T T T
T F F F T
F T T F T
F F T F T

The right-hand column is always true, so MP must be a valid form of argument! In fact, if you look closely, this is the exact same table from our example of a valid argument above: you’ve already been using MP and you didn’t even know it!

It’s good practice to try making these tables, so I’m going to leave them out for the rest of this section. But don’t let that fool you: you should absolutely try to make them yourself. It’s important to be able to do this kind of stuff if you have any interest in logic.

3.2.2. Modus Tollens

Modus Tollens (MT) is kind of the reverse of MP in that it takes an implication and the falsity of its consequent in order to infer the falsity of its antecedent. Its logical form is:

A\rightarrow B,\neg B\vdash\neg A

Put another way, the only way for a true implication to have a false consequent is for its antecedent to be false as well. Thus, we can infer that this is the case. Try doing the truth table for it. Seriously.
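
If you’d rather have the machine draw the table, here’s a small Python sketch (the `truth_table` helper is just my own illustration) that prints the table for MT written as a single implication:

```python
from itertools import product

def truth_table(names, formula):
    # Print one row per assignment of truth values to the variables.
    print("  ".join(names) + "  result")
    for values in product([True, False], repeat=len(names)):
        cells = "  ".join("T" if v else "F" for v in values)
        print(cells + "  " + ("T" if formula(*values) else "F"))

# Modus Tollens as a single sentence: ((A -> B) and not B) -> not A
def mt(a, b):
    antecedent = ((not a) or b) and (not b)
    return (not antecedent) or (not a)

truth_table(["A", "B"], mt)   # the result column is T in every row
```

But do it by hand at least once anyway; that’s where the intuition comes from.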

3.2.3. Disjunctive Syllogism

Disjunctive Syllogism (DS) says that, given a disjunction, if one of the disjuncts is false, then the other one must be true. This technically has two forms:

A\vee B,\neg B\vdash A
A\vee B,\neg A\vdash B

This one can be tricky to visualize, but if you do the truth table it becomes fairly obvious as to why this is valid. Doooo iiiitttt!

3.2.4. “And” Elimination

“And” Elimination (\wedgeE) is a rule that states that if a conjunction is true, then so are its conjuncts. It also technically has two forms:

A\wedge B\vdash A
A\wedge B\vdash B

There’s not much to say about this one: if two things are true together, then obviously both of them are true on their own.

3.2.5. “And” Introduction

Similarly, if two things are true on their own, then both of them will be true together. This is what “And” Introduction (\wedgeI) states.

A,B\vdash A\wedge B

3.2.6. “Or” Elimination

Okay, this one’s a bit weird. Bear with me. First I’m going to give you its logical form:

A\vee B,A\rightarrow C,B\rightarrow C\vdash C

“Wait, where did this C business come from?” I hear you asking. Well, if all we have to work from in an “Or” Elimination (\veeE) is a single disjunction, then there’s not really any way of figuring out which of the two disjuncts is true. BUT! If we know that C (whatever it happens to be) is implied by both A and B, then regardless of which one is actually true, C will hold either way.

It’s confusing, I know. That’s why you’re supposed to be doing the truth tables for these. Honestly, they make sense once you do.

3.2.7. “Or” Introduction

Not nearly as complicated is “Or” Introduction (\veeI). It has the forms:

A\vdash A\vee C
A\vdash C\vee A

Again, C springs out of nowhere, but it kind of makes sense this time. If we know that A is true, then we can attach whatever we want to it with a disjunction without changing anything: all we’re claiming is that either what we’ve attached is true, or A is (which it is).

3.2.8. “Iff” Elimination

If you’ll recall from Part II, “iff” is shorthand for “if and only if” and is used to describe the connective \leftrightarrow. Thus, “Iff” Elimination (\leftrightarrowE) is:

A\leftrightarrow B\vdash A\rightarrow B
A\leftrightarrow B\vdash B\rightarrow A

This formalizes the idea that the two sides of a biconditional imply each other.

3.2.9. “Iff” Introduction

“Iff” Introduction (\leftrightarrowI) is just the reverse of \leftrightarrowE and states:

A\rightarrow B,B\rightarrow A\vdash A\leftrightarrow B

Plainly, if two sentences imply each other, then they are equivalent.

3.2.10. Tautology

A tautology is a sentence which is always true. The canonical example is A\vee\neg A, since no matter what, exactly one of those disjuncts will be true (making the whole thing true). Others include A\rightarrow A and many more, which we’ll see below. But the neat thing about tautologies is that they can be inferred from nothing:

\vdash A\vee\neg A

Remember that this means that if everything on the left is true (which it trivially always will be, since there is nothing on the left) then the sentence on the right will be true (which it always will be, again).

This actually gives us a new notion of validity for sentences instead of arguments. A sentence is valid if and only if it is true under all truth value assignments. This also means that an argument can be said to be valid if and only if its logical form is valid.

3.2.11. Reductio ad Absurdum

Recall in Part I where we talked about how anything can be derived from a contradiction? Well, we actually have a rule of inference for that. In fact, we have two, which together fall under the name Reductio ad Absurdum. The first one we will call \botI (or “Bottom Introduction”):

A\wedge\neg A\vdash\bot

The symbol \bot is called “bottom” and is a logical constant used to represent a contradiction. It is also used to represent a sentence which is always false, regardless of truth values (kind of like the opposite of a tautology), but these concepts are actually identical. You can even make a truth table to prove it.

The second rule is Bottom Elimination (\botE) and has the form:

\bot\vdash C

Which is how we formalize our notion that anything can be derived from a contradiction.

This whole process is what’s referred to as Reductio ad Absurdum: a form of argument where you assume something is true in order to show that it leads to a contradiction, and thus can’t actually be true. But we’ll deal with this further in a more philosophically-minded post.

3.2.12. Summary

Because it’s handy to have that all in one place:

A\rightarrow B,A\vdash B (MP)
A\rightarrow B,\neg B\vdash\neg A (MT)
A\vee B,\neg A\vdash B (DS)
A\vee B,\neg B\vdash A (DS)
A\wedge B\vdash A (\wedgeE)
A\wedge B\vdash B (\wedgeE)
A,B\vdash A\wedge B (\wedgeI)
A\vee B,A\rightarrow C,B\rightarrow C\vdash C (\veeE)
A\vdash A\vee C (\veeI)
A\vdash C\vee A (\veeI)
A\leftrightarrow B\vdash A\rightarrow B (\leftrightarrowE)
A\leftrightarrow B\vdash B\rightarrow A (\leftrightarrowE)
A\rightarrow B,B\rightarrow A\vdash A\leftrightarrow B (\leftrightarrowI)
\vdash A\vee\neg A (Taut.)
A\wedge\neg A\vdash\bot (\botI)
\bot\vdash C (\botE)
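
As a sanity check, here’s a Python sketch that mechanically verifies every rule in this list. The `entails` helper (my own naming) tests whether the conclusion is true in every row where all the premises are, which is exactly what \vdash means; \bot is encoded as the constant False:

```python
from itertools import product

def imp(a, b): return (not a) or b
def iff(a, b): return a == b

def entails(premises, conclusion, n=3):
    # premises |- conclusion: every assignment that makes all the
    # premises true must also make the conclusion true.
    return all(conclusion(*vs)
               for vs in product([True, False], repeat=n)
               if all(p(*vs) for p in premises))

rules = {
    "MP":   ([lambda a, b, c: imp(a, b), lambda a, b, c: a],         lambda a, b, c: b),
    "MT":   ([lambda a, b, c: imp(a, b), lambda a, b, c: not b],     lambda a, b, c: not a),
    "DS":   ([lambda a, b, c: a or b, lambda a, b, c: not a],        lambda a, b, c: b),
    "andE": ([lambda a, b, c: a and b],                              lambda a, b, c: a),
    "andI": ([lambda a, b, c: a, lambda a, b, c: b],                 lambda a, b, c: a and b),
    "orE":  ([lambda a, b, c: a or b,
              lambda a, b, c: imp(a, c),
              lambda a, b, c: imp(b, c)],                            lambda a, b, c: c),
    "orI":  ([lambda a, b, c: a],                                    lambda a, b, c: a or c),
    "iffE": ([lambda a, b, c: iff(a, b)],                            lambda a, b, c: imp(a, b)),
    "iffI": ([lambda a, b, c: imp(a, b), lambda a, b, c: imp(b, a)], lambda a, b, c: iff(a, b)),
    "Taut": ([],                                                     lambda a, b, c: a or not a),
    "botI": ([lambda a, b, c: a and not a],                          lambda a, b, c: False),
    "botE": ([lambda a, b, c: False],                                lambda a, b, c: c),
}
for name, (premises, conclusion) in rules.items():
    print(name, entails(premises, conclusion))   # each rule prints True
```

Notice that the bottom rules come out valid vacuously: no assignment ever makes their premises true, so there is no row left to falsify them.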

3.3. Rewriting Rules

In addition to the rules of inference, there are many other rules which can be used to rewrite sentences (or even parts of sentences) in order to make derivations easier.

I like to call these rewriting rules. The interesting thing about them is that they can, themselves, all be rewritten to take the form of a biconditional tautology. I’ll show you how at the end. For now, here are some useful rewriting rules:

3.3.1. Commutativity

Commutativity is a general math term used to indicate that it doesn’t matter what order your terms are in. For example, 4+5=5+4=9 tells us that addition is commutative, since it doesn’t matter whether the 5 or the 4 comes first.

Similarly, classical logic has two main commutative operators: \wedge and \vee. Our rewriting rules can be written as:

(A\wedge B)\Leftrightarrow(B\wedge A)
(A\vee B)\Leftrightarrow(B\vee A)

I am using the symbol \Leftrightarrow to indicate that the left side can be rewritten as the right side, and vice versa, without affecting the truth of the sentence.

Now, this might seem obvious, but the reason it’s important to point out is that, while \wedge and \vee are commutative, not every operator is. In particular, \rightarrow is not. To see this, remember the difference between “if” and “only if” that we discussed in Part II.

3.3.2. Associativity

Associativity is another math term, this one indicating that it doesn’t matter how you group repeated applications of an operation. Again, consider addition:

(1+2)+4=3+4=7=1+6=1+(2+4)

It doesn’t matter whether we add 1 and 2 first, or whether we add 2 and 4. That’s why you would be more likely to see this kind of expression simply written as 1+2+4.

Similarly, \wedge and \vee are both associative:

((A\wedge B)\wedge C)\Leftrightarrow(A\wedge (B\wedge C))
((A\vee B)\vee C)\Leftrightarrow(A\vee (B\vee C))

Which is why you will always see a string of \wedge's and \vee's written without brackets as:

A_1\wedge A_2\wedge\ldots\wedge A_n
A_1\vee A_2\vee\ldots\vee A_n

Again, for an example of an operator that is not associative try making truth tables for the sentences A\rightarrow(B\rightarrow C) and (A\rightarrow B)\rightarrow C.

3.3.3. DeMorgan’s Laws

DeMorgan’s Laws are probably the rewriting rules that you will use most often. They firmly establish the relationship between \wedge,\vee and \neg. They are:

\neg(A\wedge B)\Leftrightarrow(\neg A\vee\neg B)
\neg(A\vee B)\Leftrightarrow(\neg A\wedge\neg B)

This can be easily remembered as “the negation of a conjunction is a disjunction of negations” and vice versa. But seeing why it’s this way is a lot harder. If you haven’t been doing the truth tables up until now, I very strongly encourage you to do this one, as this concept will come up time and time again, and not just in classical logic. It comes up in set theory, quantificational logic, modal logic: you name it! So do the truth tables now and get it out of the way. 🙂
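
To go along with your hand-made tables, here’s a tiny Python check that exhausts all four rows (just a sketch, with \wedge, \vee, and \neg standing in as Python’s and, or, and not):

```python
from itertools import product

# Check both of DeMorgan's Laws over every truth assignment.
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
print("DeMorgan's Laws hold in all four rows")
```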

3.3.4. Double Negation

Remember in Part I when I said that anything that isn’t true is false and anything that isn’t false is true? It may have seemed like a stupid thing to say at the time, but here’s why it’s important to point out: because it gives us this rule.

\neg\neg A\Leftrightarrow A

This should be clear enough: if a negation swaps the truth value of a sentence, then its negation should swap the truth value back to what it originally was. Easy, right? See, not all of these have to be painful.

3.3.5. Implicational Disjunction

I’ll level with you, I don’t actually know what this is called, and I couldn’t even find anything online naming it. But it’s important. Super important. This rule is:

(A\rightarrow B)\Leftrightarrow(\neg A\vee B)

In fact, this is so important that many logical systems don’t even bother defining implication, as you can get it simply with negation and disjunction. We’ll see why you might want to do this in the future, but for now just know that these two sentences are equivalent, and it can save you a lot of headaches. This is another one that I would strongly encourage you to do the truth table for.

3.3.6. Rewrites Are Tautologies

I mentioned above that rewriting rules can be rewritten as tautologies (sentences which are always true). Hopefully you’ve already figured this out, but in case you haven’t, in order to do this all you have to do is change \Leftrightarrow into \leftrightarrow and BAM! You have a new sentence which will always be true.

No kidding, try it out!

3.3.7. Internal Rewrites

The biggest difference between rewriting rules and rules of inference is that rewriting rules can be used on the inside of sentences. For example:

(A\wedge\neg\neg B)\rightarrow C

Can be rewritten as:

(A\wedge B)\rightarrow C

Whereas, in order to apply a rule of inference, the entire sentence needs to have the relevant form.

3.3.8. Summary

And once again, because it’s handy to have them all in one place:

(A\wedge B)\Leftrightarrow(B\wedge A) (Comm.)
(A\vee B)\Leftrightarrow(B\vee A) (Comm.)
((A\wedge B)\wedge C)\Leftrightarrow(A\wedge (B\wedge C)) (Assoc.)
((A\vee B)\vee C)\Leftrightarrow(A\vee (B\vee C)) (Assoc.)
\neg(A\wedge B)\Leftrightarrow(\neg A\vee\neg B) (DeM)
\neg(A\vee B)\Leftrightarrow(\neg A\wedge\neg B) (DeM)
\neg\neg A\Leftrightarrow A (DN)
(A\rightarrow B)\Leftrightarrow(\neg A\vee B) (def\rightarrow)
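
And since every rewriting rule doubles as a biconditional tautology, here’s a Python sketch that checks the whole list at once, over every assignment (the dictionary keys are just labels I’ve chosen to match the abbreviations above):

```python
from itertools import product

def imp(a, b): return (not a) or b

# Read each rewriting rule as a biconditional; all should be tautologies.
rewrites = {
    "Comm. (and)":  lambda a, b, c: (a and b) == (b and a),
    "Comm. (or)":   lambda a, b, c: (a or b) == (b or a),
    "Assoc. (and)": lambda a, b, c: ((a and b) and c) == (a and (b and c)),
    "Assoc. (or)":  lambda a, b, c: ((a or b) or c) == (a or (b or c)),
    "DeM 1":        lambda a, b, c: (not (a and b)) == ((not a) or (not b)),
    "DeM 2":        lambda a, b, c: (not (a or b)) == ((not a) and (not b)),
    "DN":           lambda a, b, c: (not (not a)) == a,
    "def ->":       lambda a, b, c: imp(a, b) == ((not a) or b),
}
for name, f in rewrites.items():
    is_tautology = all(f(*vs) for vs in product([True, False], repeat=3))
    print(name, is_tautology)   # each prints True
```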

3.4. Questions

You should now be able to answer the following questions.

  1. Use truth tables to show whether the arguments from Part I are valid in classical logic.
  2. Create truth tables showing that all of the rules of inference from section 3.2. are valid.
  3. Give three examples of tautologies.
  4. Give three examples of contradictions.
  5. Create truth tables showing that all of the rewriting rules from section 3.3. are valid.
  6. Without creating a truth table, use DeMorgan’s Laws and Double Negation to show that:
    • (A\wedge B) is equivalent to \neg(\neg A\vee\neg B)
    • \neg(\neg A\wedge\neg B) is equivalent to (A\vee B)
  7. Use truth tables to show that \leftrightarrow is commutative and associative.