
I wonder how the developers of computers came to the idea of using binary numbers? As I remember, early calculating machines used the decimal system (e.g. Babbage's machine, IBM's tabulators). For example, did Konrad Zuse invent the floating-point binary representation himself when developing his machine, or were there previous works describing how numbers can be added in binary using relays or tubes?

So, were there any works on floating-point binary numbers and implementing operations with them before 1938?

Also, did he invent logic gates, or were there previous works on those as well?



> I wonder how the developers of computers came to the idea of using binary numbers?

Essentially a logical consequence of electronics and the equipment they were working with at the time.

Note that 0 and 1 are not pure zero volts and five voltsⁱ. There is a lower range that will definitely be read as 0 and an upper range that will definitely be read as 1, and hopefully the signal doesn't spend long between them. Also, switching isn't absolutely instant: switches bounceʲ, and debouncing methods smooth that out but drag out the time to transition from 0->1 or 1->0.

All of this is true for multi-level electronics too, only more so, which makes it easier and more reliable to work in binary than with 3, 4, or more levels per line.

Ternary (base-3) computers did exist, some using -1/0/+1ʰ and some using 0/1/2, and multi-level logic is used in some storage methods today (e.g. multi-level cell flash), so binary isn't the be-all and end-all; it is just usually the most reliable & convenient tool for the job.

----

[i] or 3.3 or whatever other logic levels you are using

[j] a lot less of a problem with microscopic transistors but think back to old chunky relays & tubes

[h] not sure if that is just notation or there were negative voltages involved - I last read about these things properly at Uni ~2.5 decades ago
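The threshold idea above can be sketched in a few lines. The voltage bands here are illustrative values in the spirit of 5 V TTL levels, not taken from any particular datasheet:

```python
# Why binary is robust: map a noisy voltage to a logic level.
# Thresholds are illustrative, roughly TTL-like; real parts differ.

V_LOW_MAX = 0.8   # anything at or below this reads as 0
V_HIGH_MIN = 2.0  # anything at or above this reads as 1

def read_bit(voltage: float):
    """Return 0, 1, or None for the forbidden middle band."""
    if voltage <= V_LOW_MAX:
        return 0
    if voltage >= V_HIGH_MIN:
        return 1
    return None  # indeterminate: caught mid-transition, or too noisy

# With only two levels, noise of +/-0.5 V around the nominal 0 V / 5 V
# still decodes correctly; squeeze four levels into the same range and
# the same noise can push a value into a neighbouring band.
print(read_bit(0.3), read_bit(4.6), read_bit(1.4))  # 0 1 None
```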


> All of this is true for multi-level electronics too, only more so, which makes it easier and more reliable to work in binary than with 3, 4, or more levels per line.

You don't need many levels. IBM's tabulators used pulses to represent numbers in decimal (e.g. 6 pulses to transmit the digit 6). That's why it is surprising that computer developers didn't reuse the existing technology and instead went with a new system.
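To make the comparison concrete, here's a rough sketch contrasting that pulse-count scheme with fixed-width binary for a single decimal digit (the string encodings are my own illustration, not how the tabulators actually framed their signals):

```python
# Serial pulse-count encoding (tabulator-style) vs. fixed-width binary,
# one character per time slot: "1" = pulse, "0" = no pulse.

def pulse_encode(digit: int) -> str:
    """Digit 6 -> six pulses; nine slots must be reserved for any digit."""
    assert 0 <= digit <= 9
    return "1" * digit + "0" * (9 - digit)

def binary_encode(digit: int) -> str:
    """Four binary slots cover all of 0..9."""
    assert 0 <= digit <= 9
    return format(digit, "04b")

print(pulse_encode(6))   # 111111000  (9 time slots per digit)
print(binary_encode(6))  # 0110       (4 time slots per digit)
```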


Above certain speeds, pulse-based systems have timing issues: how many pulses can you cram in before they start to merge due to signal noise? Assuming serial communication, transmitting 9 pulses takes more time slots than the 4 needed to represent 9 in binary (assuming 0 is represented by no pulse rather than a single pulse, so that no signal at all indicates an error condition). If you allow variable-length transmissions, pure pulses become more efficient overall, but synchronising multiple signals potentially becomes more difficult.

So you fix the signal-merging issue found in multi-level signals, but introduce other, similar difficulties. Both sets of issues get more problematic as signalling speed increases.


Claude Shannon's 1937 master's thesis is the work that proved the usefulness of Boolean algebra and binary for digital circuits.

https://harrymoreno.com/assets/greatPapersInCompSci/3.2_-_A_...

https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_a...


> I wonder how the developers of computers came to the idea of using binary numbers?

Logic gates[0]:

  A logic gate is an idealized or physical device
  that performs a Boolean function, a logical operation
  performed on one or more binary inputs that produces
  a single binary output. 

  ...

  In the real world, the primary way of building logic
  gates uses diodes or transistors acting as electronic
  switches.
0 - https://en.wikipedia.org/wiki/Logic_gate
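The quoted definition can be sketched directly in code. This is a toy model, not how gates are physically built; it also shows that a single NAND gate is functionally complete, since the other gates are derived from it:

```python
# Boolean functions on binary inputs, per the quoted definition.
# NAND alone is functionally complete: NOT, AND, OR are built from it.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

print(and_(1, 1), or_(0, 0), not_(1))  # 1 0 0
```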


Many early electronic computers still used base 10. It afforded encoding schemes that could help detect errors stemming from component failures, leveraged existing decimal vacuum tubes, and simplified output to displays and printers, since no conversion from base 2 was necessary.

https://en.wikipedia.org/wiki/Decimal_computer


Floats are just scientific notation in binary. Once you decide that's what you want, the hard part is the hardware and error handling.
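You can see the "scientific notation in binary" directly by pulling apart a modern IEEE 754 double (a format standardised long after Zuse, but the sign/exponent/mantissa idea is the same):

```python
import struct

# Decompose an IEEE 754 double into sign * 1.mantissa * 2^exponent.
# Python floats are doubles: 1 sign bit, 11 exponent bits, 52 mantissa bits.

def decompose(x: float):
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the exponent bias
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

s, e, m = decompose(6.5)         # 6.5 = +1.625 * 2^2
print(s, e, 1 + m / 2**52)       # 0 2 1.625
```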


> I wonder how developers of computers came to idea to use binary numbers?

It's the most robust choice when building actual physical computers: only two states need to be reliably distinguished. You can also sort of relate it to an analog value going either up or down, the only two directions it can go.



