What are 0 and 1 in a computer?
The zeroes and ones in a computer are the two possible values of a bit, the smallest unit of data in a computer. They represent two states, also called low and high or true and false.
While processing information, this value decides which paths of a circuit current can flow through and which paths are blocked. For storage, the value is recorded by the orientation of tiny magnets (tapes, HDDs, etc.) or by whether an electrical charge is present or not (RAM, SSDs, etc.).
How can information be stored in ones and zeroes?
Here’s how.
Since there are only two possible values for a binary digit, binary looks rather different from our standard decimal system, which uses ten different values (0–9). A binary digit's weight increases by powers of 2 rather than by powers of 10. In a binary numeral, the digit furthest to the right is the "ones" digit; the next digit to the left is the "twos" digit; next comes the "fours" digit, then the "eights" digit, then the "16s" digit, then the "32s" digit, and so on. The decimal equivalent of a binary number is found by multiplying each digit by its place value and summing the results; for example, 1101 is 8 + 4 + 0 + 1 = 13.
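Here is a minimal sketch in Python, just for illustration, of that place-value sum (the binary string "1101" is only an example value):

    # A minimal sketch: convert the binary string "1101" to decimal
    # by weighting each digit with its power-of-two place value.
    bits = "1101"
    value = 0
    for position, digit in enumerate(reversed(bits)):
        value += int(digit) * (2 ** position)   # 1*1 + 0*2 + 1*4 + 1*8
    print(value)           # 13
    print(int(bits, 2))    # 13, Python's built-in conversion agrees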
For letters and other characters, it gets slightly more complicated. The computer looks the values up in previously defined tables (ASCII, for example) and decodes them accordingly.
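As a small sketch of that table lookup (Python, assuming standard ASCII), the built-in ord and chr functions map between a character and its code:

    # ord() maps a character to its ASCII/Unicode code point,
    # chr() maps the code point back to the character.
    code = ord("A")
    print(code)                 # 65
    print(format(code, "08b"))  # 01000001, the 8-bit pattern stored for 'A'
    print(chr(code))            # A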
The zero and one in a computer are binary code. A binary code represents text, computer processor instructions, or other data using any two-symbol system, most often the binary number system's 0 and 1. The binary code assigns a pattern of binary digits (bits) to each character, instruction, etc.
In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings. Those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal, or hexadecimal notation. There are many character sets and many character encodings for them.
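As an illustration (a Python sketch, with an arbitrary example byte), the same fixed-width bit string can be displayed in any of those notations:

    # One 8-bit value shown in the notations a code table might use.
    code = 0b11111111           # an example 8-bit string with all bits set
    print(format(code, "08b"))  # binary:      11111111
    print(format(code, "o"))    # octal:       377
    print(code)                 # decimal:     255
    print(format(code, "x"))    # hexadecimal: ff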
A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lower case a, if represented by the bit string 01100001 (as it is in the standard ASCII code), can also be represented as the decimal number 97.
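A short Python sketch of that exact example, just to make the translation concrete:

    # The ASCII bit string 01100001, read as a binary number, is 97,
    # and code point 97 is the lower case 'a'.
    print(int("01100001", 2))   # 97
    print(chr(97))              # a
    print(ord("a"))             # 97, back the other way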