The difference between logic and mathematics appears to be a difference between objects of abstraction. Logic abstracts from particular sentential units (or propositions), while mathematics abstracts from units of a much narrower category: only those that somehow relate to quantity or magnitude. So in logic we aim for valid permutations of groups of assertions in natural language (or a translated derivative thereof), while in mathematics we aim for valid permutations of pure quantity (represented by number).

But I believe there is another difference: some operators in mathematics specify an operation to be carried out, while the connectives in logic specify no operation, but a mere relationship which collapses into a determinate truth value (assuming the statements are well-formed, translated responsibly, and composed of unambiguous units). But this is not to say that mathematical statements contain no logical element, or that they are always devoid of relational concepts. It is just to point out that logic proper does not concern itself with operations, let alone operations on quantity. One might object that De Morgan’s Laws appear to describe an operation (something similar, perhaps, to the distributive property in algebra). But De Morgan’s Laws (in first-order logic, at least) are transformation rules that do not modify the values of the atoms whatsoever.
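For readers who like to check such claims mechanically, here is a small Python sketch (my own illustration, not part of the original argument) that exhaustively verifies one of De Morgan’s equivalences and shows that the atoms themselves are left untouched:

```python
from itertools import product

# One of De Morgan's equivalences: not (P and Q) <=> (not P) or (not Q).
# We check every truth assignment; note that the atoms P and Q are
# never modified -- only the truth value of each compound is computed.
for P, Q in product([True, False], repeat=2):
    lhs = not (P and Q)
    rhs = (not P) or (not Q)
    assert lhs == rhs
print("equivalent under every assignment")
```

The assertion never fails, which is just the truth-table content of the rule restated as a check.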

Here’s an example of one of De Morgan’s rules:

¬(P ∧ Q) ⊢ ¬P ∨ ¬Q

It says merely that from x we can infer y. And here’s an example of the distributive property in an arithmetical context:

a × (b + c) = (a × b) + (a × c)

This too amounts to “from x, we can infer y”. So I am not claiming that only logic utilizes transformation rules that are free of mutation (the example above illustrates that the phenomenon appears in mathematics as well). My point is that *after* we’re done translating some logical statement (deriving an equivalent, perhaps simpler statement from a complex one), there is no additional step that amounts to a mutation of the logical atoms. If we consider all of the connectives in first-order logic (“and”, “or”, etc.) and the quantifiers (“all”, “some”, etc.), as well as other operators (e.g. “not”), none of these denotes an operation that mutates a variable; only the resulting truth-values are affected. A simple example of how this differs in mathematics can be seen here:

x² = x · x

What does it mean to square some value x? It means that we multiply it by itself. This is mutation. Pure logic, on the other hand, focuses *only* on inference patterns between sets of propositions, where the primary question is not “what is the resulting value?” but “from {a, b, c}, can we infer x?”
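The contrast can be put in a few lines of Python (the particular values here are my own illustrative choices):

```python
# Math: squaring is an operation carried out on a quantity,
# producing a new magnitude from the old one.
x = 3
x_squared = x * x        # 9 -- a new value is computed

# Logic: a connective performs no operation on its atoms;
# it merely yields the truth value of their relationship.
P, Q = True, False
conjunction = P and Q    # False -- P and Q themselves are unchanged
print(x_squared, conjunction)  # prints: 9 False
```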

The difference might be much easier to cognize if viewed from a pragmatic perspective rather than a theoretical one. We use math to make calculations pertaining to all varieties of magnitude: weight, distance, velocity, rate of change, etc. Logic, by itself, does not have any “built-in” features to carry out any of that. Logic is used to study valid inference patterns that hold, typically, between assertions…for example, between a set of premises and a conclusion. This is why philosophy has never relied much on calculus, and engineering has never relied too heavily on first- or higher-order logic.
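The pragmatic contrast can also be sketched in Python (the figures and the modus ponens check below are my own assumptions, chosen purely for illustration): math computes a magnitude, while logic checks whether an inference is valid.

```python
from itertools import product

# Math in use: calculating a magnitude (average velocity).
distance_m, time_s = 120.0, 8.0
velocity = distance_m / time_s          # 15.0 metres per second

# Logic in use: checking the validity of modus ponens
# (premises: P and P -> Q; conclusion: Q). The inference is valid
# iff no truth assignment makes the premises true and Q false.
valid = all(
    Q
    for P, Q in product([True, False], repeat=2)
    if P and ((not P) or Q)             # both premises true
)
print(velocity, valid)  # prints: 15.0 True
```

Note that the first computation yields a quantity, while the second yields only a verdict about an inference pattern.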

If I’m wrong about any of this, forgive me! Luckily, errors in this domain rarely result in bodily harm…and I’m fairly certain that misjudgments of this nature will not influence reincarnation details or my status in the afterlife.