By Solens_van
#7940
In today's world, computers use the binary system to interpret data and instructions. Compilers and programs translate these into sequences of 0's and 1's according to agreed codes, and in this form the computer handles them: a 1 is a pulse of electricity and a 0 is the lack of one (in practice, a pulse at a much lower voltage). With the technology available during the early development of computers, I can understand that telling 0's from 1's was all they could manage back then, but nowadays electronic devices are finer and more accurate, and I'm sure they can discern and create flows of electricity at several different voltages.

My idea would be a computer that, instead of the binary system, manipulates data in the decimal system. Maybe this is too difficult to achieve, since there must be some reason why nobody sells one of these already (I doubt I'm the only one to think of it). So another possibility could be using some base other than 2. Probably it's a matter of reliability, because electronic components tend to age, or maybe it's a matter of programming convenience (you would have to change your whole mindset to program one of these). But I'm sure that any higher base (3, 4, 5, 6, 7...) would make for faster computers.
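To put a rough number on why a higher base might help (this is just my own back-of-the-envelope sketch in Python, not a claim about real hardware): each digit in a higher base carries more information, so the same value needs fewer digits.

    # How many digits does each base need to write the same value?
    def digits_needed(value, base):
        count = 0
        while value > 0:
            value //= base
            count += 1
        return count

    for base in (2, 3, 10, 16):
        print(f"base {base:2d}: {digits_needed(1_000_000, base)} digits")
    # prints: base 2 needs 20 digits, base 3 needs 13, base 10 needs 7, base 16 needs 5

Fewer digits per number is the whole appeal; whether that actually translates into speed is exactly what gets argued below.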
By avatar72
#8226
This is a good idea. I think that in quantum computing they are trying to store variable states rather than just boolean states, for example by measuring atomic spin.

I have often thought that analogue devices using varying waveforms might be able to make more precise, faster calculations, but the results would still need to be quantized digitally to be interpreted, and I'm not sure how programmable instructions would be represented.

But there is something a bit black and white about binary I agree. ;-D
By Jack Nobbz
#8342
I'm afraid you don't seem to understand how digital devices work.

I had been typing out a lengthy explanation, but I've cut the crap and compressed it down.

Essentially, it won't work for a variety of reasons, and the current system is really very good. It's very fast and using more discrete states (more than two) to express data would require a fundamental and impractical change to the whole operation of computers, and would actually cause drawbacks.

Good on you for thinking of it, though.
By jstr
#8836
There are computer languages and chips that use systems other than binary, especially in the field of artificial intelligence, e.g. neural nets implemented on a computer chip. But even after the processing, a binary result is required. Many of the earlier computing systems were analog, and they were abandoned because they were not as practical as a binary system.

In some applications, using more than 2 values can be an advantage, and you will find another system has been used. A good example, I think, is frequency-shift modulation in computer modems, where the frequency is altered by a choice of several specific amounts and can therefore hold a few bits of data in each interval.
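Just to put a number on "a few bits of data in each interval" (my own quick sketch, not specific to any particular modem standard): with M distinguishable symbol values you get log2(M) bits per symbol.

    # Bits carried per symbol when each symbol can take one of M levels
    # (frequencies, voltages, phases...). Illustrative only.
    import math
    for levels in (2, 4, 8, 16, 64):
        print(f"{levels:3d} levels -> {math.log2(levels):.0f} bits per symbol")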

I think, however, that for processing and logic the binary system is perfect. Also, as I'm sure you are well aware, rather than processing data serially, your computer processes it in parallel: for a bit width of 32, 32 binary bits go into your CPU, giving it 2^32 = 4294967296 possible combinations that the CPU can recognize in one clock cycle. Of course this number would be greater if you had a higher radix, but why would you bother when you can just increase the bit width?
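For a rough comparison (my own made-up pairings, just to illustrate the point): the number of values a word can hold is the radix raised to the width, so simply widening a binary word quickly overtakes raising the radix.

    # States per word = radix ** width; the pairings are illustrative only.
    for radix, width in [(2, 32), (2, 64), (3, 32), (10, 10)]:
        print(f"radix {radix:2d}, width {width:2d}: {radix ** width:,} states")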

Besides this, some wonderfully neat algorithms exist for binary, such as divide-and-conquer algorithms. Binary also lends itself directly to Boolean logic and algebra.
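A couple of the classic binary tricks I have in mind (standard textbook examples, sketched in Python, my own wording):

    def is_power_of_two(n):
        # A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
        return n > 0 and (n & (n - 1)) == 0

    def halve(n):
        # Dividing by two is just a one-bit right shift in binary.
        return n >> 1

    print(is_power_of_two(64), is_power_of_two(96))  # True False
    print(halve(100))                                # 50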

Anyway that’s my two cents on why I don't think it would offer an improvement... but I am generally wrong about things :)

jesse
http://jesse.bur.st
#14099
It needs to be binary because, if you imagine a block of memory cells, each cell is essentially just an on/off switch. On is 1 and off is 0, and this is the equivalent of one bit. When you (the CPU) read these bits in sequence, they give you meaningful results.
This is how computers (and even our minds) work. If I ask you to describe an object, for example, you can answer with a series of yes-or-no questions (is it alive? is it solid? ... etc.).
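Here is a toy illustration of that (my own example, not anything specific to a real memory chip): read a row of on/off cells in sequence and you get a number, and under an agreed code the number means something.

    # Eight switches, on = 1 / off = 0, read left to right.
    cells = [0, 1, 0, 0, 0, 0, 0, 1]

    value = 0
    for bit in cells:
        value = value * 2 + bit

    print(value, chr(value))   # 65 A  (interpreting the number as ASCII)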

What you are describing is completely different and a very complex way of computing things. It could have a lot of potential, but the idea needs to be developed further.
#14128
Another reason not mentioned above is that the heat an analog system would generate would be overwhelming. The 'on' and 'off' digital binary used in today's computers generates most of its heat when transitioning from one state to the other, which usually takes nanoseconds -- not much time to create heat. Holding a level somewhere in the middle of that transition (for a decimal value), as an analog system would do, would generate heat continuously.
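To sketch that argument with the usual rule of thumb (made-up numbers, purely illustrative): switching logic mostly burns energy during transitions, roughly alpha * C * V^2 * f, while holding an intermediate level through a resistive path dissipates power all the time, whether or not anything is computing.

    # Toy comparison with invented values -- not measurements of any real chip.
    alpha = 0.1    # fraction of gates switching per cycle
    C = 1e-9       # switched capacitance, farads
    V = 1.0        # supply voltage, volts
    f = 1e9        # clock frequency, hertz
    print("dynamic (switching) power:", alpha * C * V**2 * f, "W")

    R = 1.0        # ohms for the held-level path, purely illustrative
    print("static (held-level) power:", V**2 / R, "W")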

However, your idea might be practical in future bio-computers -- time will tell.
#23425
I realize there are theoretical advantages to this concept (hypothetically). What may be more interesting is the use of a larger typology of operations. Supposedly, in DOS's predecessors there were three different operative commands (Read, Write, Print). If these original three are built into the function of computer chips, then it could be useful to re-define the basic architecture of computers around a wider array of equal-level commands.

For example, I conceive that Rule, Theme, Load, Save, Quit, Test, and Translate may be equivalent commands based on a typological system.

But of course it seems as though quantum computing could do much more than that in a few quick steps, if they were being serious about it. I think a lot can be done with aphorisms for instance, based on categories.

See for example, my upcoming book called The Dimensional Philosopher's Toolbox.

http://www.nathancoppedge.com
By JRE
#23642
Good post, Solens_van. I like your insight into the future. Computers have become slower and slower as the years have passed. Yes, there are advancements, but due to the excessive data of those advancements there is far too much compiling and translating going on for the prehistoric 1s and 0s. When I went to computer school 12 years ago, I was told why computers use binary instead of decimal. I didn't accept it then and I don't accept it now.

When the first binary computers were developed (notice the plural -- there had been many contributions in different areas of computing), the binary base made sense. If you are inventing something and cannot effectively achieve your desired goal, most of the time the next best thing is a good choice. But you cannot tell me that the most brilliant minds of today cannot find a way to evolve from this, especially since they have been given such a strong foundation. The answer is they can but won't. At least not yet. To evolve from binary to a decimal system would require a great many more man-hours to achieve what we already have today, and of course it would be more expensive.

I draw your attention to a device that was invented to use water as the source of fuel in a carburetor-based engine. It was bought by the auto manufacturers and then buried. Most people have never heard of it. We have the same situation with carbon-based energy as opposed to natural. Not very attractive to the oil and energy companies.

Just a note: the first black and white cameras used the 1 (on) and 0 (off) principle to distinguish between light and dark. Then someone invented the color camera.
By KLOD
#23680
Let me preface this by saying that I am by no means an expert and will endeavor therefore to separate my opinions from what I know to be true.

Decimal computing is not a new idea. The first example I know of was ENIAC, dated 1946, which was not only a decimal computer but one of the first computers of the modern world. It seems decimal computing was only ever used in large computers built for research purposes, and to my knowledge it has not been used in personal computing. There are reasons for this.

For one, decimal computers and binary computers function differently at the lowest level. The essential idea is that rather than simply identifying each tick as having or not having a charge, the processor needs to be able to differentiate different magnitudes of charge. In the era when such computers were in (relatively) common use, our ability to do this autonomously was not as refined as it is today, so the difference between each level of magnitude had to be exaggerated, resulting in a much more intense drain on the machine's power supply than one would see in a binary computer. Though modern electrical circuits don't suffer from this as much, a decimal computer would undoubtedly still need more power than a binary computer.
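A way to picture that difference (my own sketch, not a description of any actual circuit): a binary front end only needs one threshold over the whole voltage swing, while a decimal one has to resolve the same swing into ten bins, each a tenth as wide.

    # Map a voltage in [0, v_max] to one of `levels` equal-width bins.
    def quantize(voltage, levels, v_max=1.0):
        bin_width = v_max / levels
        return min(int(voltage / bin_width), levels - 1)

    sample = 0.47
    print("binary reading :", quantize(sample, 2))    # 0  (single threshold at 0.5 V)
    print("decimal reading:", quantize(sample, 10))   # 4  (bins only 0.1 V wide)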

There is also the fact that the devices we use today are built for a world where every PC uses binary. I can't speak to how difficult it might be for the countless developers of these devices to transition, but it is something to consider. The same concern applies to software development as well.

IBM did release a few mainframes that use decimal computation, and they are indeed fast. The increase in speed is hard to judge, since it depends on the nature of the data being processed, but I'm sure it is nothing to sneeze at. Neither is the $100k price tag. We are not likely to see anything like this on the shelves at Fry's in the next few decades.
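For anyone wondering what decimal computation buys you in everyday terms, here is a software-level analogy (Python's decimal module, which is not how the IBM hardware works internally, just an illustration of the idea): binary floating point cannot represent 0.1 exactly, while decimal arithmetic can.

    from decimal import Decimal

    print(0.1 + 0.2)                          # 0.30000000000000004
    print(Decimal("0.1") + Decimal("0.2"))    # 0.3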