Why does 8 bits make a byte?

What is the history of why bytes are eight bits? (asked by DarenW)

This might be one of those questions where we can't answer it better than good old Wikipedia. So why would you prefer 12 bits to 8?

Is the last sentence in jest? A 12-bit byte would be inconvenient because it's not a power of 2. Memory and registers weren't so cheap back then, so 8 bits was a good compromise compared to 6 or 9 bits (fractions of a larger machine word). Also, address calculations are a heck of a lot simpler with powers of 2, and that counts when you're making logic out of raw transistors in little cans.
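To make the address-calculation point concrete, here is a rough C sketch of why power-of-two element sizes reduce addressing to a shift (the 8-unit and 6-unit element sizes are purely illustrative):

#include <stdint.h>
#include <stdio.h>

/* With a power-of-two element size, the address of element i is just a
   shift; with a 6- or 9-unit size you need a genuine multiply, which
   cost real transistors on early hardware. */
static uint32_t addr_pow2(uint32_t base, uint32_t i) {
    return base + (i << 3);      /* 8-unit elements: shift by 3 */
}

static uint32_t addr_awkward(uint32_t base, uint32_t i) {
    return base + i * 6;         /* 6-unit elements: full multiply */
}

int main(void) {
    printf("%u %u\n", addr_pow2(0x1000, 5), addr_awkward(0x1000, 5));
    return 0;
}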

Using word sizes that were powers of 2 was not so important in the "early days".

Jerry Coffin

I thought I read somewhere that 8 came from 7-bit ASCII plus a validation bit that was needed because the early transmission protocols were not as loss-less as the designers wanted.

@LokiAstari: Yes, it's called a parity bit, and it can be used for crude forms of error detection or recovery.
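As a concrete sketch of that scheme, assuming even parity over the 7 data bits (one common choice), the eighth bit can be computed like this in C:

#include <stdint.h>
#include <stdio.h>

/* Return the 7-bit ASCII value with an even-parity bit placed in bit 7,
   so the whole 8-bit byte always has an even number of 1 bits. */
static uint8_t add_even_parity(uint8_t ascii7) {
    uint8_t ones = 0;
    for (int i = 0; i < 7; i++)
        ones += (ascii7 >> i) & 1u;
    return (uint8_t)((ascii7 & 0x7Fu) | ((ones & 1u) << 7));
}

int main(void) {
    uint8_t b = add_even_parity('C');   /* 'C' = 0x43, three 1 bits */
    printf("0x%02X\n", b);              /* parity bit set: prints 0xC3 */
    return 0;
}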

Wikipedia: Parity bit.

@MSalters: Primarily that it has arguably "stunted" the evolution of hardware. The PC has largely stopped that, and taken an architecture that wasn't even particularly progressive when it was new, and preserved it for decades.

Current character sets aren't 16 or 32 bits, nor do Java and Windows use such. The current character set is Unicode, which needs 21 bits to map directly. Current software uses encodings based on 8-bit (UTF-8), 16-bit (UTF-16) or 32-bit (UTF-32) code units, combining multiple code units to form a single code point where necessary, but those bit sizes are a consequence of the hardware, not of the character set.
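To show how multiple 8-bit code units combine into one code point, here is a minimal UTF-8 encoder sketch in C (an illustration only, not a hardened implementation):

#include <stdint.h>
#include <stdio.h>

/* Encode one Unicode code point (up to U+10FFFF) into UTF-8.
   Returns the number of 8-bit code units written to out[0..3]. */
static int utf8_encode(uint32_t cp, uint8_t out[4]) {
    if (cp < 0x80) {                          /* 7 bits: 1 byte */
        out[0] = (uint8_t)cp;
        return 1;
    } else if (cp < 0x800) {                  /* up to 11 bits: 2 bytes */
        out[0] = (uint8_t)(0xC0 | (cp >> 6));
        out[1] = (uint8_t)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {                /* up to 16 bits: 3 bytes */
        out[0] = (uint8_t)(0xE0 | (cp >> 12));
        out[1] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (uint8_t)(0x80 | (cp & 0x3F));
        return 3;
    } else {                                  /* up to 21 bits: 4 bytes */
        out[0] = (uint8_t)(0xF0 | (cp >> 18));
        out[1] = (uint8_t)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (uint8_t)(0x80 | (cp & 0x3F));
        return 4;
    }
}

int main(void) {
    uint8_t buf[4];
    int n = utf8_encode(0x1F600, buf);        /* U+1F600 needs 4 code units */
    for (int i = 0; i < n; i++) printf("%02X ", buf[i]);
    printf("\n");                             /* prints F0 9F 98 80 */
    return 0;
}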

Jay Elston

Parity was very important when dealing with early memory. Even after moving to 8-bit data bytes, there were memory chips with 9 bits to allow for parity checking.

This is an interesting assertion. Is there any historical data to support the idea?

You need to do some reading on the history of computing!!

That info came from the Wikipedia page that I linked. Like I said, I'm not a hardware expert and I'm certainly not a historian, but if you feel that I'm so far off, you might want to go update that Wikipedia page. I guess it would help if I didn't screw up the link as I was entering it in. I also apologize for saying "first CPU". Since I was quoting the wiki page, I should have said "first microprocessor". That's what I meant. Things like ASCII, with its 128 characters (not counting the extended sets), were created around the byte because the available combinations were enough, as wattly said.

I guess once things just got based on that, they stuck around whether we want them or not. The stop bits in serial communications are also from this era - at the end of each letter you sent 1 or 2 meaningless bits to allow the mechanical wheels to get back into position. When mainframe technology borrowed this tty tech wholesale for character representation, an 8-bit word size became a handy thing to put letters into.

What part of the Jargon File's definition of byte didn't you understand? Byte predates the microprocessor revolution by roughly 15 years. It's like kids these days don't get that computers used to consume rooms, if not buildings, and didn't use integrated circuits. The term "byte" harks back to the dark ages of computing.

It's nowhere near recently coined. Since memory address decoders are expensive, early computers generally had memory addressable in larger units than characters: the PDP-6 and its successors, the DECsystem-10 and -20, which were the foundation for the early Arpanet, used a 36-bit word, for example; CDC machines like the 6600 and Cyber 76 used a 60-bit word. How to represent a "character" on these machines was largely a matter of choice in the programming.
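As an illustration of what "a matter of choice in the programming" meant, here is a C sketch that packs six 6-bit character codes into one 36-bit word, simulated in a 64-bit host integer (the layout is illustrative, not any particular machine's convention):

#include <stdint.h>
#include <stdio.h>

#define CHARS_PER_WORD 6   /* six 6-bit codes fill a 36-bit word */

/* Pack six 6-bit codes into the low 36 bits of a 64-bit host integer. */
static uint64_t pack36(const unsigned codes[CHARS_PER_WORD]) {
    uint64_t word = 0;
    for (int i = 0; i < CHARS_PER_WORD; i++)
        word = (word << 6) | (codes[i] & 0x3Fu);
    return word;
}

/* Extract the i-th 6-bit code (0 = leftmost) from a packed word. */
static unsigned unpack36(uint64_t word, int i) {
    return (unsigned)((word >> (6 * (CHARS_PER_WORD - 1 - i))) & 0x3Fu);
}

int main(void) {
    unsigned codes[CHARS_PER_WORD] = {1, 2, 3, 4, 5, 6};
    uint64_t w = pack36(codes);
    printf("word = %012llo\n", (unsigned long long)w);  /* octal suits 6-bit fields */
    printf("code 2 = %u\n", unpack36(w, 2));
    return 0;
}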

There was a very popular early character-addressable machine, the IBM 1401, but it too used six bits per character - that is, per addressable location. Then the ASCII character set was developed and specified 95 printing characters (counting the space) and 33 control characters; this fit in seven bits. But seven bits per character would have been absurdly difficult to work with.

I didn't mean to imply that it was coined recently.

I thought the OP was asking why a byte was the smallest unit of access instead of bits. EDIT: I imagine the fact that nibbles make converting binary to hex and vice versa much easier also made bytes more popular to work with.
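The nibble observation is easy to see in code: each 4-bit nibble maps to exactly one hex digit, so a byte is always two digits. A small C sketch:

#include <stdint.h>
#include <stdio.h>

/* Convert one byte to two hex digits by splitting it into nibbles. */
static void byte_to_hex(uint8_t b, char out[3]) {
    static const char digits[] = "0123456789ABCDEF";
    out[0] = digits[b >> 4];     /* high nibble */
    out[1] = digits[b & 0x0F];   /* low nibble */
    out[2] = '\0';
}

int main(void) {
    char hex[3];
    byte_to_hex(0xA7, hex);
    printf("%s\n", hex);         /* prints A7 */
    return 0;
}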

Some folks took it seriously. I thought of it as a spoof. I wonder what he thought when he started seeing those DOS character pages.

Damn straight! Octets in networking all the way! Addressing is a whole other can of worms. Well, actually, the really early teletypes used FIVE bits per character. An escape character shifted the unit from the uppercase-only alphabetic type box to the numeric type box. This was commonly referred to as the "Baudot" code. ASCII at that time had not defined anything to do with character codes, so seven bits were enough.

The transmission codes used a leading start bit, seven data bits, a parity bit, and two stop bits; the punched paper tape used was also eight-level. An ASR could be optioned to send the parity bit as "space", "mark", even, or odd parity, and would punch its tape the same way.
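A rough C sketch of that framing, assuming even parity and the data bits sent least-significant bit first (the usual asynchronous convention), shows how one 7-bit character becomes an 11-unit frame on the wire:

#include <stdint.h>
#include <stdio.h>

/* Build an 11-unit asynchronous frame for one 7-bit character:
   1 start bit (0), 7 data bits LSB first, 1 even-parity bit, 2 stop bits (1).
   frame[0] is sent first. */
static void frame_char(uint8_t ascii7, int frame[11]) {
    int ones = 0, k = 0;
    frame[k++] = 0;                        /* start bit (space) */
    for (int i = 0; i < 7; i++) {
        int bit = (ascii7 >> i) & 1;
        ones += bit;
        frame[k++] = bit;                  /* data bits, LSB first */
    }
    frame[k++] = ones & 1;                 /* even parity bit */
    frame[k++] = 1;                        /* stop bit (mark) */
    frame[k++] = 1;                        /* second stop bit */
}

int main(void) {
    int frame[11];
    frame_char('E', frame);                /* 'E' = 0x45 */
    for (int i = 0; i < 11; i++) printf("%d", frame[i]);
    printf("\n");
    return 0;
}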

Hm, he was asking both. Most programs deal with individual bits relatively infrequently; it is far more common to deal with collections of bits that represent a character, or an integer, or an address, or a floating point number. The machine architectures encourage the "aggregate" view in their registers. For example, x86 has about seven or nine or ten general-purpose registers, depending on your definition.

I don't consider the instruction pointer or the stack pointer "general purpose"; it's not as though you can use them for anything else! These registers are 32 bits wide. It would be horribly awkward and slow to have to do these things a bit at a time.
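To see why bit-at-a-time operation would be painful, here is a deliberately bit-serial 32-bit addition in C (a full adder in a loop, purely for illustration), next to the single word-wide addition the hardware actually performs:

#include <stdint.h>
#include <stdio.h>

/* Add two 32-bit values one bit at a time with an explicit carry,
   the way you would be forced to if registers were only 1 bit wide. */
static uint32_t add_bit_serial(uint32_t a, uint32_t b) {
    uint32_t sum = 0, carry = 0;
    for (int i = 0; i < 32; i++) {
        uint32_t x = (a >> i) & 1u, y = (b >> i) & 1u;
        sum |= (x ^ y ^ carry) << i;                  /* sum bit */
        carry = (x & y) | (x & carry) | (y & carry);  /* carry out */
    }
    return sum;
}

int main(void) {
    uint32_t a = 123456789u, b = 987654321u;
    printf("%u %u\n", add_bit_serial(a, b), a + b);   /* same result */
    return 0;
}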

We do deal with individual bits now and then - but not often. Very much so, and this again was largely due to IBM's use of hexadecimal for internal representations of memory. On a machine where the bits per word are divisible by six, octal representations make more sense.
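The arithmetic behind that: an octal digit covers 3 bits and a hex digit covers 4, so 6-bit character codes line up exactly with pairs of octal digits while 8-bit bytes line up with pairs of hex digits. A tiny C sketch:

#include <stdio.h>

/* A 6-bit character code is exactly two octal digits (3 + 3 bits), while an
   8-bit byte is exactly two hex digits (4 + 4 bits). Mismatched groupings
   straddle digit boundaries, which is why the notation followed the unit. */
int main(void) {
    unsigned sixbit = 053;   /* a 6-bit code: two octal digits, awkward in hex */
    unsigned byte = 0xA7;    /* an 8-bit byte: two hex digits, awkward in octal */
    printf("6-bit value: octal %02o, hex %X\n", sixbit, sixbit);
    printf("8-bit value: hex %02X, octal %o\n", byte, byte);
    return 0;
}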

DEC was in that mold for a long time: 12-bit words on the PDP-8, 36-bit words on the PDP-6 and its successors. DEC's later machines did use 8-bit character codes. The odd word sizes have not really survived, merely because they are not as easy to manipulate. You certainly can't split an odd number in half, which means if you were to divide bytes, you would have to keep track of the length of the bitstring.

Finally, 8 is also a convenient number; many people (psychologists and the like) claim that the human mind can generally recall only about seven things immediately without playing memory tricks.

I think I'm getting off track now...

Banfa

It isn't always 8; sometimes it is 7 or 9. This is platform dependent. In the file limits.h, the macro CHAR_BIT gives the number of bits in a byte on your platform.
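A minimal check of that, assuming a hosted C compiler (note that the C standard requires CHAR_BIT to be at least 8, though some unusual DSP targets make it larger):

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a byte on this implementation. */
    printf("bits per byte here: %d\n", CHAR_BIT);
    printf("bytes per int here: %zu\n", sizeof(int));
    return 0;
}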
