Computer Science Canada: How does a computer compare numbers?
  
| Author: | Insectoid [ Fri Dec 19, 2008 10:16 am ] | 
| Post subject: | How does a computer compare numbers? | 
How does a computer actually compare numbers to be greater than or less than each other? In actual binary/cpu/whatever stuff, not programming. My guess is that it would subtract the 'smaller' number from the 'larger' number in terms of the comparison, then use the negative/positive bit as the result. ex. 3>5: in this case, 3 would be the 'larger' number because the comparison is asking if 3 is larger than 5. 3-5 returns -2. Take the negative bit as the answer (is that bit high for positive or negative?) and return it as the true/false result. This would, however, not account for equality. I'm just curious as to how it actually works.  | 
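A minimal C sketch of that guess, assuming 32-bit two's-complement integers (the function names are made up for illustration; real hardware keeps the sign in the ALU's flag bits rather than returning it from a function). To answer the aside: the sign bit is high (1) for negative results.

```c
#include <stdint.h>
#include <stdio.h>

/* Compare a and b the way the guess describes: subtract, then inspect the
 * sign bit of the result (bit 31, which is 1 for negative numbers in
 * two's complement).  Overflow is ignored here; that wrinkle comes up
 * later in the thread. */
static int less_than(int32_t a, int32_t b) {
    uint32_t diff = (uint32_t)a - (uint32_t)b;  /* wrap-around subtraction */
    return (diff >> 31) & 1;                    /* sign bit set means a < b */
}

static int equal(int32_t a, int32_t b) {
    return ((uint32_t)a - (uint32_t)b) == 0;    /* zero difference means equal */
}

int main(void) {
    printf("3 < 5  -> %d\n", less_than(3, 5));  /* prints 1 */
    printf("5 < 3  -> %d\n", less_than(5, 3));  /* prints 0 */
    printf("4 == 4 -> %d\n", equal(4, 4));      /* prints 1 */
    return 0;
}
```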
	|
| Author: | OneOffDriveByPoster [ Fri Dec 19, 2008 11:02 am ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Subtraction is exactly it. I believe comparisons are usually thought of as less-than.  | 
	|
| Author: | Insectoid [ Fri Dec 19, 2008 12:05 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
I was right? Yay! Usually my assumptions are more complicated than the actual solution...I am quite pleased with myself! Huzzah! Newbie God-dom! And with minimal spammage!  | 
	|
| Author: | OneOffDriveByPoster [ Fri Dec 19, 2008 12:26 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Just to elaborate:
A < B is already using <
A > B can be rewritten as B < A
A <= B can be rewritten as !(A > B), so !(B < A)
A >= B can be rewritten as !(A < B)  | 
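A quick sketch of those rewrites in C, pretending a single less-than primitive (`lt`, a made-up name) is the only comparison available:

```c
#include <stdio.h>

/* Pretend this is the only comparison the hardware gives us. */
static int lt(int a, int b) { return a < b; }

/* Everything else rewritten in terms of lt, as described above. */
static int gt(int a, int b) { return lt(b, a); }               /* a >  b  ==  b < a    */
static int le(int a, int b) { return !lt(b, a); }              /* a <= b  ==  !(b < a) */
static int ge(int a, int b) { return !lt(a, b); }              /* a >= b  ==  !(a < b) */
static int eq(int a, int b) { return !lt(a, b) && !lt(b, a); } /* neither is smaller   */

int main(void) {
    printf("3 >  5 : %d\n", gt(3, 5));  /* 0 */
    printf("3 <= 5 : %d\n", le(3, 5));  /* 1 */
    printf("5 >= 5 : %d\n", ge(5, 5));  /* 1 */
    printf("5 == 5 : %d\n", eq(5, 5));  /* 1 */
    return 0;
}
```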
	|
| Author: | matt271 [ Mon Apr 06, 2009 2:11 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
why would it subtract? that seems like a lot of work for a simple check. what if you just checked for the first most significant bit that is a 1? if it comes first in 1 number that number is bigger. if it comes at the same time, check the 2nd, 3rd, etc. until it finds the first difference. this is how i would do it:
3 -> 0011
5 -> 0101
first bit: neither is 1, so keep checking
second bit: the second one is a 1, therefore 5 is bigger than 4
or,
10 -> 1010
11 -> 1011
first bit: both are 1, so keep checking
second bit: neither is 1, so keep checking
third bit: both are 1, so keep checking
fourth bit: the second one is a 1, therefore 11 is bigger than 10  | 
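A rough C version of that bit-walking scheme, for unsigned values only (the negative-number question comes up right below):

```c
#include <stdint.h>
#include <stdio.h>

/* Walk from the most significant bit down; the first position where the
 * bits differ decides which number is bigger.  Unsigned only -- this is
 * exactly the scheme described above, so negatives aren't handled. */
static int unsigned_greater(uint32_t a, uint32_t b) {
    for (int i = 31; i >= 0; i--) {
        int abit = (a >> i) & 1;
        int bbit = (b >> i) & 1;
        if (abit != bbit)
            return abit;        /* 1 if a has the 1-bit first, so a > b */
    }
    return 0;                   /* all bits equal: not greater */
}

int main(void) {
    printf("5 > 3  : %d\n", unsigned_greater(5, 3));    /* 1 */
    printf("10 > 11: %d\n", unsigned_greater(10, 11));  /* 0 */
    return 0;
}
```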
	|
| Author: | Tony [ Mon Apr 06, 2009 2:38 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
matt271 @ Mon Apr 06, 2009 2:11 pm wrote: if it comes first in 1 number that number is bigger. 
How would you handle negative numbers?  | 
	|
| Author: | md [ Mon Apr 06, 2009 4:20 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Doing subtraction is easy in hardware - the ALU is there, and is used for lots of things. Comparing bits is hard - the hardware for it is very specific, and it isn't there. And subtraction with transistors can be thought of as atomic. It takes one unit of time. Your bitwise comparison would be just as fast or could indeed be slower.  | 
	|
| Author: | Zren [ Mon Apr 06, 2009 4:51 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Tony @ Mon Apr 06, 2009 2:38 pm wrote: matt271 @ Mon Apr 06, 2009 2:11 pm wrote: if it comes first in 1 number that number is bigger. 
How would you handle negative numbers?
Aren't negative numbers just the first half of the maximum of an integer? Like say 256 is the maximum number of values possible if we were using whole numbers (only positive integers):
00000000 = -128
10000000 = 0
11111111 = 127
So in total there are 256 possibilities. So in a sense, matt271's checking algorithm would still work. PS: 0011 = 3 not 4. I'm not 100% on this... but I always wondered why you could have double the size for a non-negative number.  | 
	|
| Author: | Tony [ Mon Apr 06, 2009 4:57 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Zren @ Mon Apr 06, 2009 4:51 pm wrote: Aren't negative numbers just the first half of the maximum of an integer? 
No, binary numbers are stored as Two's complement:
 127 = 0111 1111
   1 = 0000 0001
   0 = 0000 0000
  -1 = 1111 1111
-128 = 1000 0000
Edit: Zren @ Mon Apr 06, 2009 4:51 pm wrote: but I always wondered why you could have double the size for a non-negative number. 
I think you might be referring to signed vs unsigned integers. If an integer is unsigned (no negative numbers), then you get one more bit of positive range, which represents one more power of 2 (the number of representable values is the same though, the range is just shifted out of the negatives).  | 
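A small C demo of the encodings listed above, assuming 8-bit integers on a typical two's-complement machine: the same bit pattern reads differently as signed vs unsigned, and both types cover the same number of values.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The same 8-bit patterns listed above, read both ways.
     * (Converting values above 127 to int8_t wraps to the negative
     * range on a typical two's-complement machine.) */
    uint8_t patterns[] = { 0x7F, 0x01, 0x00, 0xFF, 0x80 };
    for (int i = 0; i < 5; i++) {
        uint8_t bits = patterns[i];
        printf("0x%02X  signed: %4d  unsigned: %3u\n",
               (unsigned)bits, (int)(int8_t)bits, (unsigned)bits);
    }
    /* Expected output:
     * 0x7F  signed:  127  unsigned: 127
     * 0x01  signed:    1  unsigned:   1
     * 0x00  signed:    0  unsigned:   0
     * 0xFF  signed:   -1  unsigned: 255
     * 0x80  signed: -128  unsigned: 128
     *
     * Both types cover 256 values; signed spans -128..127 and
     * unsigned spans 0..255 -- same size of range, just shifted. */
    return 0;
}
```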
	|
| Author: | matt271 [ Mon Apr 06, 2009 6:35 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Zren @ Mon Apr 06, 2009 5:51 pm wrote: PS: 0011 = 3 not 4. 
my bad :] md @ Mon Apr 06, 2009 5:20 pm wrote: And subtraction with transistors can be thought of as atomic. It takes one unit of time. Your bitwise comparison would be just as fast or could indeed be slower. 
interesting. im going to look into exactly how that works. i thought math operations were done by some complicated and highly refined method using bit shifts and boolean operations. but yeah, i thought of negative numbers and kinda shrugged it off  | 
	|
| Author: | Tony [ Mon Apr 06, 2009 6:49 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Math operations are complicated. But it's all done in one clock cycle. If you are comparing two 32 bit numbers, then your best case scenario is getting a conclusive match on the first try, and the worst case is that the numbers are equal. So in the best case, you spend as much time as it would take to do a subtraction; in the worst case the method of checking individual bits is 32 times slower. (Unless there's specialized hardware involved, but you still can't beat the 1 clock of a subtraction.)  | 
	|
| Author: | DanielG [ Mon Apr 06, 2009 7:45 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
would subtraction always work? what happens if you have a huge -ve and a huge +ve? subtracting one from the other might result in a number that is greater than maxint or smaller than minint, which would cause problems.  | 
	|
| Author: | Tony [ Mon Apr 06, 2009 8:17 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
depends on the hardware, most would detect an overflow. Then the ALU can figure out the result based on the overflow flag and the high bit. Edit: note that the IF comparison itself still comes out correct, but explicitly computing the difference overflows, as in the sketch below.
  | 
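A rough C sketch of that point, assuming 32-bit two's-complement integers: the sign bit alone gives the wrong answer once the subtraction overflows, but combining the sign with an overflow flag (here via the GCC/Clang builtin __builtin_sub_overflow, standing in for the ALU's overflow bit) recovers the correct result.

```c
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Naive version from earlier in the thread: sign bit of (a - b) only. */
static int lt_sign_only(int32_t a, int32_t b) {
    uint32_t diff = (uint32_t)a - (uint32_t)b;
    return (diff >> 31) & 1;
}

/* What the ALU actually does: a < b exactly when sign(a - b) differs from
 * overflow(a - b).  __builtin_sub_overflow (GCC/Clang) reports whether the
 * signed subtraction overflowed, much like the hardware overflow flag. */
static int lt_with_overflow(int32_t a, int32_t b) {
    int32_t diff;
    bool overflow = __builtin_sub_overflow(a, b, &diff);
    int sign = (diff < 0);
    return sign != overflow;
}

int main(void) {
    int32_t big_neg = INT32_MIN + 5;   /* huge negative */
    int32_t big_pos = INT32_MAX - 5;   /* huge positive */

    /* The plain comparison is fine... */
    printf("native <       : %d\n", big_neg < big_pos);                 /* 1 */
    /* ...the sign bit alone gets it wrong, because a - b overflowed... */
    printf("sign only      : %d\n", lt_sign_only(big_neg, big_pos));    /* 0 (wrong) */
    /* ...and sign combined with overflow recovers the right answer. */
    printf("sign + overflow: %d\n", lt_with_overflow(big_neg, big_pos));/* 1 */
    return 0;
}
```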
	|
| Author: | matt271 [ Mon Apr 06, 2009 8:25 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
this is pretty interesting. i didnt realize the cpu could do stuff like subtract two integers in 1 clock. is clock the unit of speed of a cpu? like does 500MHz = 500 million clocks per sec? i remember somebody telling me that 500MHz means 500 million "operations" per second and that "1+1" would be *several* operations. i want to learn this kind of thing. im only first year cs and we havnt jumped into anything interesting yet  | 
	|
| Author: | Tony [ Mon Apr 06, 2009 8:49 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
matt271 @ Mon Apr 06, 2009 8:25 pm wrote: i remember somebody telling me that 500MHz means 500 million "operations" per second and that "1+1" would be *several* operations. 
Indeed. One way to increase a CPU's clock speed is to split instructions into more steps, have each (smaller) step run faster, and thus claim that the CPU is faster. Pentium 4s got up to 31 pipeline steps per instruction. That's why back in the day "slower" AMDs and PowerPCs were outperforming crazy-fast P4s on benchmarks. Although to make things more complicated to understand, this is where pipelining comes in -- running multiple instructions at the same time.  | 
	|
| Author: | Zeroth [ Mon Apr 06, 2009 11:14 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Ah... see this is awesome. I TA a machine architecture lab.

Okay, first, capabilities of hardware. Any boolean function you can come up with, of the kind where you have M inputs and N outputs and a whole table of results, can be built in hardware. It was proven way back in the day that the hardware constructions are equivalent to the basic boolean operations, and vice versa. What this also means is that you can have whole classes of inputs and outputs. As in, you can have two sets of inputs that represent numbers, and when they go through the circuit, out comes the sum of the two numbers. Addition, subtraction, and thus comparisons, take only one clock cycle.

Now, let's say we have a half-adder circuit. It can add together two 1-bit values, and outputs their sum plus a carry. Combine two half-adders (and an OR gate) and you get a full adder, which also accepts a carry in. Chain 32 full adders so that each one's carry-out feeds the next one's carry-in, and we have a ripple-carry adder for 32-bit integers, with overflow! The only catch is that the carry has to ripple through the stages one after another, so a clock cycle has to be at least as long as it takes for all 32 stages to settle in sequence. Subtraction works the same way, using the same adders and just feeding in the two's complement (the negative) of the second argument.

However, it is multiplication and division that take much longer. But because of the way modern chips work, they have the appearance of being much faster. Modern chips do out-of-order execution: they take the instructions and execute them out of order, starting many-cycle instructions earlier, so that when you actually reach the instruction, the result is already there. As well, with pipelining, the chip executes instructions at nearly the same time: the first instruction starts being processed, then at the next clock cycle the next instruction enters the pipeline behind it. It's like having 10 loads of laundry, and having 1 washing machine, 1 dryer, and 1 table for folding the laundry. The first load goes into the washer. When the first load is finished, you put in the second load, and put the first load into the dryer. And so on like that. That's how pipelining works. Each of the different parts of the chip gets used all the time, for all the instructions; just that at any one point in time, each of those different sections is dealing with a different instruction. There are a lot of associated issues with pipelining and out-of-order execution, and that's what 75% of the silicon on chips these days deals with: hardcoded cache algorithms, pipelining and OoO logic.

So in reality, addition, subtraction, etc, are so very very very very simple.  | 
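To make the chain-of-adders idea concrete, here is a small C model of a ripple-carry adder: each stage is a full adder built from two half-adders, the carry ripples from one stage to the next, and subtraction reuses the same stages by feeding in the complement of the second operand with a carry-in of 1. This simulates the gates; it is not how you would actually add in software.

```c
#include <stdint.h>
#include <stdio.h>

/* One full-adder stage: two half-adders plus an OR gate.
 * sum = a XOR b XOR cin, cout = (a AND b) OR (cin AND (a XOR b)). */
static void full_adder(int a, int b, int cin, int *sum, int *cout) {
    int s1 = a ^ b;            /* first half-adder */
    int c1 = a & b;
    *sum   = s1 ^ cin;         /* second half-adder */
    int c2 = s1 & cin;
    *cout  = c1 | c2;          /* combine the two carries */
}

/* 32 stages chained together: a ripple-carry adder.  The carry out of
 * stage i feeds stage i+1, which is why the gate delay grows with the
 * word size. */
static uint32_t ripple_add(uint32_t a, uint32_t b, int *carry_out) {
    uint32_t result = 0;
    int carry = 0;
    for (int i = 0; i < 32; i++) {
        int sum;
        full_adder((a >> i) & 1, (b >> i) & 1, carry, &sum, &carry);
        result |= (uint32_t)sum << i;
    }
    *carry_out = carry;
    return result;
}

/* Subtraction is the same hardware: add the two's complement of b,
 * i.e. ~b with an initial carry-in of 1. */
static uint32_t ripple_sub(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    int carry = 1;
    for (int i = 0; i < 32; i++) {
        int sum;
        full_adder((a >> i) & 1, (~b >> i) & 1, carry, &sum, &carry);
        result |= (uint32_t)sum << i;
    }
    return result;
}

int main(void) {
    int c;
    printf("3 + 5 = %u\n", (unsigned)ripple_add(3, 5, &c));   /* 8 */
    printf("3 - 5 = %d\n", (int)(int32_t)ripple_sub(3, 5));   /* -2 */
    return 0;
}
```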
	|
| Author: | matt271 [ Mon Apr 06, 2009 11:22 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
wow  | 
	|
| Author: | DemonWasp [ Tue Apr 07, 2009 12:10 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Pipelining is also part of why you would split an individual "instruction" into multiple steps - if you execute one step per clock cycle, you can determine more easily which parts of the CPU are free at any given point, to allow instructions to be run "concurrently". This boosts speed immensely.

The problem with ever-increasing MHz/GHz ratings is partially due to the increased heat that such a chip produces, and the decreased distance an electrical signal can propagate in one cycle. Consider a humble 1GHz CPU - even in that case, you have only 1 / 1 000 000 000 th of a second per clock cycle. Light travels a mere 30cm in that length of time; subtract the time required for your transistors to fire and you rapidly discover that you need some very, very short wires.

Instruction sets are broadly divided into CISC and RISC, those being "complex" and "reduced" instruction sets. CISC chips (for example, the x86 and x64 CPUs nearly all of us have in our desktops) use a large number of instructions. This allows for complicated multi-step instructions that can accomplish several things at the processor level, including things like looping and function calls. Meanwhile, RISC chips (like sparc and powerpc processors, usually found only in servers now) use only a few simple instructions. This makes designing and building the CPU easier - and therefore easier to optimise for. It's still unclear which approach is better, though x86/64 seems to have won out on the desktop-computing side, having superior marketing (so often a factor where technical reasons alone aren't enough to decide).  | 
	|
| Author: | Tony [ Tue Apr 07, 2009 12:24 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Well, considering that x86 chips have a large portion of them dedicated to translating CISC instructions into RISC, before execution, I have a pretty good idea of which approach is better  | 
	|
| Author: | DemonWasp [ Tue Apr 07, 2009 12:30 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Yes, but by the same token, you have compilers for RISC chips expanding fairly elementary operations into a series of dozens of minor operations. I would agree in general that RISC is a better idea, it's just a question of which does better, performance-wise.  | 
	|
| Author: | Tony [ Tue Apr 07, 2009 1:19 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
considering that CISC is the RISC instruction set + more, then by definition RISC should be at the very least as fast. CISC was neat when coding in assembly, because complicated instructions were like a short-hand for writing out longer RISC equivalents. Though assuming that we are at least one level above assembly, we no longer see that distinction. Code could be compiled down to CISC and then interpreted to RISC via hardware, or code could be compiled down to RISC directly.  | 
	|
| Author: | DemonWasp [ Tue Apr 07, 2009 1:55 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Disagree on the first point, agree on the second. RISC is probably, objectively speaking, a better choice now (partially because we can afford the space in our instruction cache now). It's not going to happen until x64 bites it though, and that's far off. It may never really happen until we get an entirely new computer format - probably something built to handle parallelism better. However, if I give you a RISC machine (add, subtract) and a CISC machine (add, subtract, multiply, divide) and ask you what 12 * 37 is, you can do it a lot faster on the CISC machine - even if the multiply instruction takes several times as long as an add. The example is exaggerated, but you get the point - if an additional instruction can skip some of the work, it can arrive at the necessary result faster; see the sketch below.  | 
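To flesh out the 12 * 37 example, here is roughly what the hypothetical add/subtract-only machine has to do, sketched in C: either repeated addition, or (since real reduced instruction sets do have shifts) shift-and-add, one step per bit. A CISC-style multiply instruction replaces the whole loop.

```c
#include <stdint.h>
#include <stdio.h>

/* The "RISC" machine from the example: only add and subtract.
 * 12 * 37 becomes 37 repeated additions of 12. */
static uint32_t mul_by_repeated_add(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    while (b--)                /* one subtract per loop for the counter... */
        result += a;           /* ...and one add for the running total */
    return result;
}

/* What machines with shifts do instead: shift-and-add,
 * one iteration per bit of b rather than per unit of b. */
static uint32_t mul_shift_and_add(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    while (b) {
        if (b & 1)
            result += a;       /* add a for each set bit of b */
        a <<= 1;               /* a doubles as we move to the next bit */
        b >>= 1;
    }
    return result;
}

int main(void) {
    printf("%u\n", (unsigned)mul_by_repeated_add(12, 37));  /* 444 */
    printf("%u\n", (unsigned)mul_shift_and_add(12, 37));    /* 444 */
    return 0;
}
```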
	|
| Author: | Tony [ Tue Apr 07, 2009 2:43 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
I might have to rethink that first point. I think it depends on the specific hardware implementations. Yes, having hardware specialized for an instruction could make it faster. It also makes the hardware more complicated. By the time you add in instructions with multiple memory accesses, your design starts having serious enough problems that you get a performance improvement by taking an interpretation penalty (startup (negligible?) and hardware cost) and running RISC anyway. It's beginning to sound like the optimal solution is somewhere in between the two, at least hardware-wise. Getting compilers up to speed for a brand new platform is a separate matter.  | 
	|
| Author: | DemonWasp [ Tue Apr 07, 2009 8:38 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
Compilers will play catch-up to whatever platform you put them on. At this point, I'm not particularly convinced that any compiler is really doing all it can with x86/64; given the size of the basic instruction set and the size of the additions, it seems like the complexity of the side effects and the exact operation of each instruction may hinder optimisation in compilers and in the CPU itself. Again, not an expert, so take my words for what little they're worth. Comparatively, I'm not convinced the sparc instruction set is very helpful either. It provides all the basic operations you may want, but leaves you writing out even the most common constructs manually (I'm thinking here of loops and function calls, which occur relatively frequently, being expanded from 1 instruction to dozens). On the other hand, maybe the fact that it's simpler helps compilers produce faster code. You couldn't tell it from dealing with the Solaris-Sparc compiler though - I'm not sure if you've ever worked with it, but it's almost as bad as xlC and its variants; avoid it if you can.  | 
	|
| Author: | Zeroth [ Tue Apr 07, 2009 7:09 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Lots of interesting information here. Throw in the fact that compiler options have grown far faster than our understanding of how they interact... and most compilers aren't doing anywhere near their optimal best. But because mainstream compilers like gcc have what, 10,000 flags(!), finding the best sets of compiler options is an NP-complete problem (runtime of 10,000!). This is evident if you compile a basic recursive C program, like hanoi, and compare the execution times under the three optimization flags (which are really just macros for "expertly" picked sets of flags), -O, -O2, -O3: you might find that the program performs as much as 2x WORSE on O3 compared to O. If you read the documentation, you see the disclaimers that these flags were picked because they generally work well for most programs, but there are some programs they are in fact slower on. There is also a slowly growing research path that studies how to apply machine learning methods to compiler optimization, so that compilers will in fact do their best.  | 
	|
| Author: | andrew. [ Tue Apr 07, 2009 7:24 pm ] | 
| Post subject: | Re: How does a computer compare numbers? | 
matt271 @ Mon Apr 06, 2009 8:25 pm wrote: this is pretty interesting. i didnt realize the cpu could do stuff like subtract two integers in 1 clock. is clock the unit of speed of a cpu? like does 500MHz = 500 million clocks per sec? 
I think it depends on the processor and its instructions. Say CPU A is 2GHz and CPU B is 1.5GHz. You might think CPU A is faster, but if CPU A takes 2 cycles to perform an operation that CPU B does in one cycle, CPU B is actually faster for that operation (2 GHz / 2 cycles = 1 billion of those operations per second, versus 1.5 billion on CPU B).  | 
	|
| Author: | Insectoid [ Tue Apr 07, 2009 8:02 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
For example, some chips are designed to use only a few different kinds of logic gates. So while chip 1 does 1+1 in 10 NAND operations, another, more expensively made chip might do it in 3 (I think 3) operations using several different kinds of gates.  | 
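As a rough illustration (the gate counts in the post are approximate), here is a one-bit full adder modelled two ways in C: once using only a NAND primitive (the classic 9-gate construction) and once using a mix of XOR/AND/OR gates, which needs far fewer gates for the same result.

```c
#include <stdio.h>

static int nand(int a, int b) { return !(a && b); }

/* Full adder built from NAND gates only -- the classic 9-gate version. */
static void full_adder_nand(int a, int b, int cin, int *sum, int *cout) {
    int n1 = nand(a, b);
    int n2 = nand(a, n1);
    int n3 = nand(b, n1);
    int n4 = nand(n2, n3);        /* a XOR b, built from 4 NANDs */
    int n5 = nand(n4, cin);
    int n6 = nand(n4, n5);
    int n7 = nand(cin, n5);
    *sum  = nand(n6, n7);         /* (a XOR b) XOR cin */
    *cout = nand(n5, n1);         /* (a AND b) OR ((a XOR b) AND cin) */
}

/* Same adder with a richer set of gates: 2 XORs, 2 ANDs, 1 OR. */
static void full_adder_mixed(int a, int b, int cin, int *sum, int *cout) {
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | ((a ^ b) & cin);
}

int main(void) {
    /* 1 + 1 with no carry-in: sum 0, carry 1, i.e. binary 10. */
    int s1, c1, s2, c2;
    full_adder_nand(1, 1, 0, &s1, &c1);
    full_adder_mixed(1, 1, 0, &s2, &c2);
    printf("NAND-only  : carry=%d sum=%d\n", c1, s1);
    printf("mixed gates: carry=%d sum=%d\n", c2, s2);
    return 0;
}
```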
	|
| Author: | matt271 [ Wed Apr 08, 2009 1:53 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
if a cpu does int calculations in 1 clock, does it also do floats in 1 clock? like, 3 - 2 is 1 clock; is 3.2f - 3.4f done in 1 clock? what about doubles?  | 
	|
| Author: | Prabhakar Ragde [ Wed Apr 08, 2009 5:29 am ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Zeroth @ Tue Apr 07, 2009 7:09 pm wrote: But because mainstream compilers like gcc have what, 10,000 flags(!), finding the best sets of compiler options is an NP-complete problem (runtime of 10,000!). 
Try not to use phrases like "NP-complete" in ways that convey a misleading impression of the concepts involved, please. It suffices to say "There are many compiler optimizations that can be turned on or off by flags, and we don't know any better method of combining them than to try all possibilities." Many of the subtasks involved in optimization (for example, register allocation) are known to be NP-complete.  | 
	|
| Author: | DemonWasp [ Wed Apr 08, 2009 7:50 am ] | 
| Post subject: | RE:How does a computer compare numbers? | 
@matt271: That depends on the floating-point unit in a CPU. Modern processors have discrete floating point and integer arithmetic units - and often more than one of each, so that you can run math in parallel more easily - which are completely different in design and operation. Floating point arithmetic likely takes longer than integer arithmetic, but this can again depend on implementation.  | 
	|
| Author: | Zeroth [ Wed Apr 08, 2009 8:04 am ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Prabhakar Ragde @ Wed Apr 08, 2009 2:29 am wrote: Zeroth @ Tue Apr 07, 2009 7:09 pm wrote: But because mainstream compilers like gcc have what, 10,000 flags(!), finding the best sets of compiler options is an NP-complete problem (runtime of 10,000!). 
Try not to use phrases like "NP-complete" in ways that convey a misleading impression of the concepts involved, please. It suffices to say "There are many compiler optimizations that can be turned on or off by flags, and we don't know any better method of combining them than to try all possibilities." Many of the subtasks involved in optimization (for example, register allocation) are known to be NP-complete. 
(Off-topic) But it IS NP-complete. NP-complete is a short-hand description for the computability of the problem. You are just being pedantic. Isn't the register allocation problem able to be described nearly the same way? "There's lots of ways to decide what to keep in the register, and what to store in memory, and we know no better way of figuring this out without trying all possible combinations."?  | 
	|
| Author: | Prabhakar Ragde [ Wed Apr 08, 2009 8:40 am ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Zeroth @ Wed Apr 08, 2009 8:04 am wrote: (Off-topic) But it IS NP-complete. NP-complete is a short-hand description for the computability of the problem. You are just being pedantic. Isn't the register allocation problem able to be described nearly the same way? "There's lots of ways to decide what to keep in the register, and what to store in memory, and we know no better way of figuring this out without trying all possible combinations."? 
I am being accurate. GCC has a fixed number of flags. Running through the possibilities is a constant-time operation, albeit a large constant. Not every problem in NP can be poly-time reduced to some instance of finding optimal compiler flags for GCC, and so this problem is not NP-complete (unless P=NP, in which case every problem in P is trivially NP-complete). It's possible that one could define some sort of abstraction in which GCC is given an unbounded number of flags, but one would have to assign a precise meaning to each one, and the result would be quite far removed from reality. The problem of register allocation is defined on programs of unbounded size. We can demonstrate a poly-time reduction of any problem in NP to register allocation, so it is NP-complete. I know it is a common colloquialism to use "NP-complete" to mean "I don't know how to do it better than to try all possibilities", but this is not technically accurate, and it is especially misleading to do so when the set of possibilities is fixed.  | 
	|
| Author: | Zeroth [ Wed Apr 08, 2009 10:12 am ] | 
| Post subject: | Re: How does a computer compare numbers? | 
Then say that, rather than just harping that I used it wrong, without explaining why I used it wrong. With that explanation, yes, you're right. Sorry about that.  | 
	|
| Author: | md [ Wed Apr 08, 2009 6:16 pm ] | 
| Post subject: | RE:How does a computer compare numbers? | 
If you're screwing around with GCC flags to optimize your code, then there is likely something else you should be doing well before that stage (or you are a Gentoo ricer)  | 
	|