Feynman Lectures on Computation


Not least I should thank Sandy Frey and Eric Mjolness, who both tried to bring some order to these notes before me. I am grateful to Geoffrey Fox, for trying to track down students who had taken the courses, and to Rod van Meter and Takako Matoba for sending copies of their notes.


I would also like to thank Gerry Sussman, and to place on record my gratitude to the late Jan van de Snepscheut, for their initial encouragement to me to undertake this task. I have tried to ensure that all errors of my understanding have been eliminated from the final version of these lectures. In this task I have been helped by many individuals. Rolf Landauer kindly read and improved Chapter 5 on reversible computation and thermodynamics and guided me patiently through the history of the subject.

Several colleagues of mine at Southampton also helped me greatly with the text: David Barron, Nick Barron and Mike Quinn, at Southampton, and Tom Knight at MIT, were kind enough to read through the entire manuscript and, thanks to their comments, many errors and obscurities have been removed.

Needless to say, I take full responsibility for any remaining errors or confusions! I am also grateful to the Optical Society of America for permission to reproduce, in slightly modified form, Feynman's paper on quantum mechanical computers.

After Feynman died, I was greatly assisted by his wife Gweneth and a Feynman family friend, Dudley Wright, who supported me in several ways, not least by helping pay for the lecture tapes to be transcribed.

I must also pay tribute to my co-editor, Robin Allen, who helped me restart the project after the long legal wrangling about ownership of the Feynman archive had been decided, and without whom this project would never have seen the light of day. Gratitude is also due to Michelle Feynman, and to Carl Feynman and his wife Paula, who have constantly supported this project through the long years of legal stalemate and who have offered me every help.

A word of thanks is due to Allan Wylde, then Director of the Advanced Book Program at Addison-Wesley, who showed great faith in the project in its early stages. Jeff Robbins and Heather Mimnaugh at Addison-Wesley Advanced Books have shown exemplary patience with the inevitable delays and my irritating persistence with seemingly unimportant details. Lastly, I must record my gratitude to Helen Tuck for her faith in me and her conviction that I would finish the job - a belief I have not always shared!

I hope she likes the result.

Tony Hey

When I produced the Lectures on Physics, some thirty years ago now, I saw them as an aid to students who were intending to go into physics. I also lamented the difficulties of cramming several hundred years' worth of science into just three volumes.

With these Lectures on Computation, matters are somewhat easier, but only just. Firstly, the lectures are not aimed solely at students in computer science, which liberates me from the shackles of exam syllabuses and allows me to cover areas of the subject for no more reason than that they are interesting.

Secondly, computer science is not as old as physics; it lags by a couple of hundred years. However, this does not mean that there is significantly less on the computer scientist's plate than on the physicist's. So there is still plenty for us to cover.

Computer science also differs from physics in that it is not actually a science. It does not study natural objects. Neither is it, as you might think, mathematics; although it does use mathematical reasoning pretty extensively.

Rather, computer science is like engineering - it is all about getting something to do something, rather than just dealing with abstractions, as in pre-Smith geology!¹ Today in computer science we also need to "go down into the mines" - later we can generalize. It does no harm to look at details first. But this is not to say that computer science is all practical, down-to-earth bridge-building. Far from it.

Computer science touches on a variety of deep issues. It has illuminated the nature of language, which we thought we understood. Computer science people spend a lot of their time talking about whether or not man is merely a machine, whether his brain is just a powerful computer that might one day be copied; and the field of 'artificial intelligence' - I prefer the term 'advanced applications' - might have a lot to say about the nature of 'real' thinking. Of course, we might get useful ideas from studying how the brain works, but we must remember that automobiles do not have legs like cheetahs, nor do airplanes flap their wings!

¹ William Smith was the father of modern geology; in his work as a canal and mining engineer he observed the systematic layering of the rocks, and recognized the significance of fossils as a means of dating them.

We do not need to study the neurologic minutiae of living things to produce useful technologies; but even wrong theories may help in designing machines. Anyway, you can see that computer science has more than just technical interest. These lectures are about what we can and can't do with machines today, and why. I have attempted to deliver them in a spirit that should be recommended to all students embarking on the writing of their PhD theses. In very broad outline, after a brief introduction to some of the fundamental ideas, the next five chapters explore the limitations of computers - from logic gates to quantum mechanics!

As far as is possible, this second volume will contain articles on 'advanced applications' by the same experts who contributed to Feynman's course, but updated to reflect the present state of the art.

Computers can add millions of numbers in the twinkling of an eye. They can outwit chess grandmasters.


They can guide weapons to their targets. They can book you onto a plane between a guitar-strumming nun and a non-smoking physics professor. Some can even play the bongoes. That's quite a variety! So if we're going to talk about computers, we'd better decide right now which of them we're going to look at, and how. In fact, we're not going to spend much of our time looking at individual machines.

The reason for this is that once you get down to the guts of computers you find that, like people, they tend to be more or less alike. They can differ in their functions, and in the nature of their inputs and outputs - one can produce music, another a picture, while one can be set running from a keyboard, another by the torque from the wheels of an automobile - but at heart they are very similar.

We will hence dwell only on their innards. What does the inside of a computer look like? Crudely, it will be built out of a set of simple, basic elements. These elements are nothing special - they could be control valves, for example, or beads on an abacus wire - and there are many possible choices for the basic set.

All that matters is that they can be used to build everything we want. How are they arranged? Again, there will be many possible choices; the relevant structure is likely to be determined by considerations such as speed, energy dissipation, aesthetics and what have you. Viewed this way, the variety in computers is a bit like the variety in houses: At heart they are very similar.

Let us get a little abstract for the moment and ask: is there a 'best' choice of basic elements, or a 'best' structure? It's a deep question. The answer, again, is that up to a point it doesn't matter. This, loosely, is the basis of the great principle of "Universality".

You may cry: "My pocket calculator can't simulate the red spot on Jupiter like a bank of Cray supercomputers!" Well, yes it can - eventually. Generally, suppose we have two computers A and B, and we know all about A - the way it works, its "state transition rules" and what-not. Assume that machine B is capable of merely describing the state of A.

We can then use B to simulate the running of A by describing its successive transitions; B will, in other words, be mimicking A. It could take an eternity to do this if B is very crude and A very sophisticated, but B will be able to do whatever A can, eventually.

We will prove this later in the course by designing such a B computer, known as a Turing machine. Let us look at universality another way. Language provides a useful source of analogy. Let me ask you this: how would you describe an automobile to someone? Of course, most languages, at least in the West, have a simple word for this; we have "automobile", the English say "car", the French "voiture", and so on. However, there will be some languages which have not evolved a word for "automobile", and speakers of such tongues would have to invent some, possibly long and complex, description for what they see, in terms of their basic linguistic elements.

Yet none of these descriptions is inherently "better" than any of the others: each ultimately does the same job. We needn't introduce this democracy just at the level of words. We can go down to the level of alphabets.

What, for example, is the best alphabet for English? That is, why stick with our usual 26 letters? Everything we can do with these, we can do with three symbols - the Morse code: dot, dash and space - or with two - a Baconian cipher, with A through Z represented by five-digit binary numbers.
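As a toy illustration of the two-symbol idea, here is a minimal Python sketch (the letter-to-number assignment is simply alphabetical position, not Bacon's original cipher):

```python
# Spell a word with only two symbols, 0 and 1: each letter becomes the
# five-digit binary form of its position in the alphabet (A = 0, B = 1, ...).
def to_five_bit(text):
    return " ".join(f"{ord(c) - ord('A'):05b}" for c in text.upper() if c.isalpha())

print(to_five_bit("CAR"))   # 00010 00000 10001
```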

So we see that we can choose our basic set of elements with a lot of freedom, and all this choice really affects is the efficiency of our language, and hence the sizes of our books. Going back to computing, universality in fact states that the set of complex tasks that can be performed using a "sufficient" set of basic procedures is independent of the specific, detailed structure of the basic set.

This instructing has to be exact and unambiguous. In life, of course, we never tell each other exactly what we want to say; we never need to, as context, body language, familiarity with the speaker, and so on, enable us to "fill in the gaps" and resolve any ambiguities in what is said.

Computers, however, can't yet "catch on" to what is being said, the way a person does. They need to be told in excruciating detail exactly what to do. Perhaps one day we will have machines that can cope with approximate task descriptions, but in the meantime we have to be very prissy about how we tell computers to do things.

Let us examine how we might build complex instructions from a set of rudimentary elements. Obviously, if an instruction set - B, say - is very simple, then a complex process is going to take an awful lot of description, and the resulting "programs" will be very long and complicated. We may, for instance, want our computer to carry out all manner of numerical calculations, but find ourselves with a set B which doesn't include multiplication as a distinct operation.

If we tell our machine to multiply 3 by 35, it says "what?" However, it will clearly simplify the writing of B-programs if we augment the set B with a separate "multiply" instruction, defined by the chunk of basic B instructions that go to make up multiplication. Then when we want to multiply two numbers, we say "computer, 3 times 35", and it now recognizes the word "times" - it is just a lot of adding, which it goes off and does.

The machine breaks these compound instructions down into their basic components, saving us from getting bogged down in low level concepts all the time. Complex procedures are thus built up stage by stage. A very similar process takes place in everyday life; one replaces with one word a set of ideas and the connections between them. In referring to these ideas and their interconnections we can then use just a single word, and avoid having to go back and work through all the lower level concepts.

Computers are such complicated objects that simplifying ideas like this are usually necessary, and good design is essential if you want to avoid getting completely lost in details. We shall begin by constructing a set of primitive procedures, and examine how to perform operations such as adding two numbers or transferring two numbers from one memory store to another. We will then go up a level, to the next order of complexity, and use these instructions to produce operations like multiply and so on.

We shall not go very far in this hierarchy. If you want to see how far you can go, take a look at the article on Operating Systems by P.J. Denning and R.L. Brown in Scientific American. This goes from level 1, that of electronic circuitry - registers, gates, buses - to level 13, the Operating System Shell, which manipulates the user programming environment.

By a hierarchical compounding of instructions, basic transfers of 1's and 0's on level one are transformed, by the time we get to level thirteen, into commands to land aircraft in a simulation or check whether a forty-digit number is prime. We will jump into this hierarchy at a fairly low level, but one from which we can go up or down. Also, our discussion will be restricted to computers with the so-called "Von Neumann architecture".

Don't be put off by the word "architecture"; it's just a big word for how we arrange things, only we're arranging electronic components rather than bricks and columns.

Von Neumann was a famous mathematician who, besides making important contributions to the foundations of quantum mechanics, was also the first to set out clearly the basic principles of modern computers. We will also have occasion to examine the behavior of several computers working on the same problem, and when we do, we will restrict ourselves to computers that work in sequence, rather than in parallel; that is, ones that take turns to solve parts of a problem rather than work simultaneously.

All we would lose by the omission of "parallel processing" is speed, nothing fundamental. We talked earlier about computer science not being a real science. Now we have to disown the word "computer" too! You see, "computer" makes us think of arithmetic - add, subtract, multiply, and so on - and it's easy to assume that this is all a computer does.

In many ways, a computer is reminiscent of a bureaucracy of file clerks, dashing back and forth to their filing cabinets, taking files out and putting them back, scribbling on bits of paper, passing notes to one another, and so on; and this metaphor, of a clerk shuffling paper around in an office, will be a good place to start to get some of the basic ideas of computer structure across. We will go into this in some detail, and the impatient among you might think too much detail, but it is a perfect model for communicating the essentials of what a computer does, and is hence worth spending some time on.

These will be discussed by invited "experts" in a companion volume.

Let's suppose we have a big company, employing a lot of salesmen. An awful lot of information about these salesmen is stored in a big filing system somewhere, and this is all administered by a clerk. We begin with the idea that the clerk knows how to get the information out of the filing system.

The data is stored on cards, and each card has the name of the salesman, his location, the number and type of sales he has made, his salary, and so on and so forth.

Now suppose we are after the answer to a specific question: what were our total sales in California? So how does our file clerk find the total sales in California? Here's one way he could do it: he could work through the cards one at a time, and whenever a card says California, add its sales figure to a running total, putting each card back as he finishes with it. Obviously you have to keep this up until you've gone through all the cards. Now let's suppose we've been unfortunate enough to hire particularly stupid clerks, who can read, but for whom the above instructions assume too much. We need to help them a little bit more.

Let us invent a "total" card for our clerk to use. He will use this to keep a running total in the following way:

    Take out next "sales" card
    If California, then:
        Take out "total" card
        Add sales number to number on card
        Put "total" card back
    Put "sales" card back
    Take out next "sales" card and repeat

This is a very mechanical rendering of how a crude computer could solve this adding problem.
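The same loop in a minimal Python sketch (the card fields and the numbers are invented for illustration; only the structure of the loop matters):

```python
# Invented sample data standing in for the clerk's card file.
cards = [
    {"name": "Smith",  "location": "California", "sales": 120},
    {"name": "Jones",  "location": "Oregon",     "sales": 80},
    {"name": "Garcia", "location": "California", "sales": 45},
]

total = 0                       # the "total" card
for card in cards:              # take out the next "sales" card
    if card["location"] == "California":
        total += card["sales"]  # add its sales to the "total" card
print(total)                    # 165
```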

Obviously, the data would not be stored on cards, and the machine wouldn't have to "take out a card" - it would read the stored information from a register. It could also write from a register to a "card" without physically putting something back. Now we're going to stretch our clerk! Let's assume that each salesman receives not only a basic salary from the company, but also gets a little commission on his sales.

To find out how much, we multiply his sales by the appropriate percentage. We want our clerk to allow for this. Now he is cheap and fast, but unfortunately too dumb to multiply! If we tell him to multiply 5 by 7 he says "what?" So we have to teach him how, and to do this we will exploit the fact that there is one thing he can do: he can add. We'll work in base two. As you all probably know, the rules for adding and multiplying single digits in base two are extremely simple. We will assume that even our clerk can remember these; all he needs are "shift" and "carry" operations, as the following example makes clear:
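In Python, a sketch of the clerk's shift-and-carry multiplication (a modern rendering of the idea, not the lecture's own worked binary example):

```python
def clerk_multiply(a, b):
    """Multiply two non-negative integers using only bit tests,
    shifts and additions - no built-in 'times' operation."""
    total = 0
    while b:
        if b & 1:            # lowest bit of b is 1: add the current copy of a
            total += a
        a <<= 1              # shift a one place left (doubles it)
        b >>= 1              # shift b one place right (move on to the next bit)
    return total

print(clerk_multiply(3, 35))  # 105
```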

In binary, then, so long as our clerk can shift and carry he can, in effect, multiply. He does it very stupidly, but he also does it very quickly, and that's the point of all this: a computer can perform very many millions of simple operations a second and is just like a very fast, dumb file clerk. It is only because it is able to do things so fast that we do not notice that it is doing things very stupidly. Interestingly enough, neurons in the brain characteristically take milliseconds to perform elementary operations, which leaves us with the puzzle: why is the brain so smart?

Computers may be able to leave brains standing when it comes to multiplication, but they have trouble with plenty of things that brains find easy. To go further, we need to specify more precisely our basic set of operations. One of the most elementary is the business of transferring information from the cards our clerk reads to some sort of scratch pad on which he can do his arithmetic. All we have done is to define the instruction "take card X" to mean copying the information on card X onto the pad, and similarly with "replace card Y".

Next, we want to be able to instruct the clerk to check whether the location on card X is "California". He has to do this for each card, so the first thing he has to be able to do is remember "California" from one card to the next. One way to help him do this is to have California written on yet another card, C, so that he can compare the location on each sales card with the contents of card C. We then tell him that if the contents match, do so and so, and if they don't, put the cards back and take the next ones.

Keeping on taking out and putting back the California card seems to be a bit inefficient, and indeed, you don't have to do that; you can keep it on the pad for a while instead. This would be better, but it all depends on how much room the clerk has on his pad and how many pieces of information he needs to keep.

We can keep on breaking the clerk's complex tasks down into simpler, more fundamental ones. How, for example, do we get him to look at the "location" part of a card from the store? One way would be to burden the poor guy with yet another card, on which is written a string of 0's and 1's, with a block of 1's lined up with the location field. Each sequence of digits is associated with a particular piece of information on the card: the clerk zips through this numeric list until he hits the set of 1's, and then reads the information next to them.

In our case, the block of 1's is lined up with California. This sort of location procedure is actually used in computers, where you might use a so-called "bitwise AND" operation (we'll discuss this later). This little diversion was just to impress upon you the fact that we need not take any of our clerk's skills for granted - we can get him to do things increasingly stupidly.
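In modern terms the trick is a bitwise AND with a mask; a short sketch with an invented 8-bit card layout (the field widths are purely illustrative):

```python
card = 0b10110101      # invented layout: two name bits, four location bits, two others
mask = 0b00111100      # 1's lined up with the location field
location = (card & mask) >> 2   # bitwise AND picks out the field; shift down to read it
print(bin(location))            # 0b1101
```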

Let's take a look at the clerk's scratch pad. We haven't yet taught the clerk how to use this, so we'll do that now. We will assume that we can break down the instructions he can carry out into two groups. Firstly, there is a core "instruction set" of simple procedures that comes with the pad - add, transfer, and so on.

These are in the hardware: If you like, they reflect the clerk's basic abilities. Then we have a set which is specific to the task, say calculating a salesman's commission. The elements of this set are built out of the instructions in the core set in ways we have discussed, and represent the combinations of the clerk's talents that will be required for him to carry out the task at hand.

The first thing we need to get the clerk to do is do things in the right order, that is, to follow a succession of instructions. We do this by designating one of the storage areas on the pad as a "program counter", which keeps track of where the clerk is in his list of instructions. The clerk looks at the program counter to see which instruction is next; he gets the instruction and stores it on his pad in an area which we call the "instruction register". Before he carries out the instruction, however, he prepares for the next one by incrementing the program counter; he does this simply by adding one to it. Then he does whatever the instruction in the register tells him to do.

Using a bracketed notation where ( ) means "contents of" - remember this, as we will be using it a lot - we can write this sequence of actions roughly as follows:

    (instruction register) ← the instruction stored at address (program counter)
    (program counter) ← (program counter) + 1
    carry out the instruction in the instruction register

The clerk will also need some temporary storage areas on the pad, to enable him to do arithmetic, for example. These are called registers, and they give him a place to store something while he goes and finds some other number.

Everything must be done in sequence and the registers allow us to organize things. They usually have names; in our case we will have four, which we call registers A, B and X, and the fourth, C, which is special - it can only store one bit of data, and we will refer to it as the "carry" register.

We could have more or fewer registers - exactly how many is a matter of design. We choose to follow the so-called "right to left" convention utilized in standard programming languages. So our clerk knows how to find out what he has to do, and when. Let's now look at the core instruction set for his pad. The first kind of instruction concerns the transfer of data from one card to another. For example, suppose we have a memory location M on the pad. We want to have an instruction that transfers the contents of register A into M; in our notation, M ← (A).


M, incidentally, is not necessarily designed for temporary storage like A. We must also have analogous instructions for register B.

Register X we will use a little differently. We shall allow transfers from B to X and X to B. In addition, we need to be able to keep tabs on, and manipulate, the address of the card the clerk is working on; this is obviously necessary if he is to find his way around the store, and in fact we'll keep this address in register X. Thus we add the corresponding transfer instructions. Next, we need arithmetical and logical operations. The most basic of these is a "clear" instruction. This means, whatever is in A, forget it, wipe it out. Then we need an Add operation, which means that register A receives the sum of the contents of B and the previous contents of A.

We also have a shift operation, which will enable us to do multiplication without having to introduce a core instruction for it. The first merely moves all the bits in A one place to the left. If this shift causes the leftmost bit to overflow we store it in the carry register C. We can also shift our number to the right; I have no use for this in mind, but it could come in handy! The next instructions are logical ones. We will be looking at these in greater detail in the next chapter, but I will mention them here for completeness.

There are three that will interest us: AND, OR and XOR. Each is a function of two digital "inputs" x and y. If both inputs are 1, then AND gives you 1; otherwise it gives you zero. As we will see, the AND operation turns up in binary addition, and hence multiplication.


The result of acting on a pair of variables with an operator such as AND is often summarized in a "truth table":

    x  y  x AND y
    0  0     0
    0  1     0
    1  0     0
    1  1     1

Our other two operators can be described in similar terms. OR gives 1 if either input, or both, is 1. XOR, the "exclusive or", is similar to OR, except it gives zero if both inputs are one; in the binary addition of x and y, it corresponds to what you get if you add x to y and ignore any carry bits.

A binary add of 1 and 1 is 10, which is zero if you forget the carry. We can introduce the relevant logical symbols for each of these operations.
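A quick computational check of these definitions, and of the remark that XOR is an add-without-carry while AND supplies the carry (a sketch in Python, using its bitwise operators):

```python
# Print the truth tables for AND, OR and XOR on single bits.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, " AND:", x & y, " OR:", x | y, " XOR:", x ^ y)

# XOR gives the sum bit of a one-bit binary add, AND gives the carry bit.
def half_add(x, y):
    return x ^ y, x & y          # (sum, carry)

print(half_add(1, 1))            # (0, 1): 1 + 1 = 10 in binary
```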

Two more operations that it turns out are convenient to have are the instructions to increment or decrement the contents of A by one. However, we want to be able to do as much as possible, so we can bring in other instructions. One other that will be useful is one that allows us to put a data item directly into a register.

For example, rather than writing California on a card and then transferring from card to pad, it would be convenient to be able to write California on the pad directly. Thus we introduce the "Direct Load" instruction. There is one class of instructions that it is vital we add: jumps. A "jump to Z" is basically an instruction for the clerk to look in instruction location Z; that is, it involves a change in the value of the program counter by more than the usual increment of one.

This enables our clerk to leap from one part of a program to another. There are two kinds of jumps, "unconditional" and "conditional". The unconditional jump we have touched on above: the clerk simply sets the program counter to Z and carries on from there. A conditional jump tells him to do this only if some condition is met - if the carry register holds a 1, say - and otherwise to carry on to the next instruction as usual. The freedom given by this conditional instruction will be vital to the whole design of any interesting machines. There are many other kinds of jump we can add.

Sometimes it turns out to be convenient to be able to jump not only to a definite location but to one a specific number of steps further on in the program. We can therefore introduce jump instructions that add this number of steps to the program counter. Finally, there is one more command that we need; namely, an instruction that tells our clerk to quit. With these instructions, we can now do anything we want, and I will suggest some problems for you to practice on below.
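To make this concrete, here is a toy simulator for a machine of roughly this kind (a sketch only: the mnemonics, the encoding of instructions as Python tuples, and the sample program are invented for illustration, not taken from the lectures). The sample program multiplies 3 by 35 the slow way, by repeated addition, using a conditional jump to close the loop:

```python
def run(program, memory):
    """Fetch-execute loop for a toy machine with registers A, B, X,
    a one-bit carry C, and a program counter."""
    reg = {"A": 0, "B": 0, "X": 0, "C": 0}
    pc = 0
    while True:
        op, arg = program[pc]        # fetch into the "instruction register"
        pc += 1                      # increment the program counter
        if op == "CLR":              # clear A
            reg["A"] = 0
        elif op == "LOAD":           # (register, memory cell): card -> pad
            reg[arg[0]] = memory[arg[1]]
        elif op == "STORE":          # (register, memory cell): pad -> card
            memory[arg[1]] = reg[arg[0]]
        elif op == "ADD":            # A <- A + B
            reg["A"] += reg["B"]
        elif op == "DEC":            # decrement a register by one
            reg[arg] -= 1
        elif op == "JMP":            # unconditional jump
            pc = arg
        elif op == "JZ":             # conditional jump: only if X is zero
            if reg["X"] == 0:
                pc = arg
        elif op == "HALT":           # tell the clerk to quit
            return memory

memory = {"m0": 3, "m1": 35, "result": 0}
program = [
    ("CLR", None),                 # 0: A <- 0
    ("LOAD", ("X", "m0")),         # 1: X <- 3   (loop counter)
    ("LOAD", ("B", "m1")),         # 2: B <- 35
    ("JZ", 7),                     # 3: finished when X reaches zero
    ("ADD", None),                 # 4: A <- A + B
    ("DEC", "X"),                  # 5: X <- X - 1
    ("JMP", 3),                    # 6: back to the test
    ("STORE", ("A", "result")),    # 7: write the answer out
    ("HALT", None),                # 8
]
print(run(program, memory)["result"])   # 105
```

A real machine would of course encode each of these instructions as a binary word rather than a named tuple; that is the opcode-plus-address business taken up below.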

Before we do that, let us summarize where we are and what we're trying to do. In a simple computer there are only a few registers; more complex ones have more registers, but the concepts are basically the same, just scaled up a bit. It is worth looking at how we represent the instructions we considered above. In our particular case each instruction contains two pieces: one part says what to do, and the other says what to do it to. For example, one of the instructions was "put the contents of memory M into register A".

The computer doesn't speak English, so we have to encode this command into a form it can understand; in other words, into a binary string. The first part of the string specifies the operation. This is the opcode, or instruction number, and its length clearly determines how many different instructions we can have.

The second part of the instruction is the instruction address, which tells the computer where to go to find what it has to load into A; that is, memory address M. Some instructions, such as "clear A", don't require an address at all. This is the first and most elementary step in a series of hierarchies: the user of an instruction need not know how it is carried out at the level below, and we want to be able to maintain such ignorance consistently.
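A sketch of what such an encoding might look like (the four-bit opcode, twelve-bit address and the particular opcode numbers are all invented for illustration):

```python
OPCODES = {"LOAD_A": 0b0001, "STORE_A": 0b0010, "CLR_A": 0b0011}

def encode(op, address=0):
    """Pack a 4-bit opcode and a 12-bit address into one 16-bit word."""
    return (OPCODES[op] << 12) | address

def decode(word):
    return word >> 12, word & 0xFFF          # (opcode, address)

word = encode("LOAD_A", 37)                  # "put contents of memory 37 into A"
print(f"{word:016b}")                        # 0001000000100101
print(decode(word))                          # (1, 37)
```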

There is one feature that we have so far ignored completely. Our machine as described so far would not work, because we have no way of getting numbers in and out. We must consider input and output. One quick way to go about things would be to assign one particular place in memory to be the input, and attach it to a keyboard so that someone from outside the machine could change its contents.

Now there are two ways in which you can increase your understanding of these issues. One way is to remember the general ideas and then go home and try to figure out what commands you need and make sure you don't leave one out. Make the set shorter or longer for convenience and try to understand the tradeoffs by trying to do problems with your choice.

This is the way I would do it because I have that kind of personality! It's the way I study - to understand something by trying to work it out or, in other words, to understand something by creating it. Not creating it one hundred percent, of course, but taking a hint as to which direction to go and then not remembering the details.

These you work out for yourself. The other way, which is also valuable, is to read carefully how someone else did it. I find the first method best for me, once I have understood the basic idea.

If I get stuck I look at a book that tells me how someone else did it. I turn the pages and then I say "Oh, I forgot that bit", then close the book and carry on. Finally, after you've figured out how to do it, you read how they did it and find out how dumb your solution is and how much more clever and efficient theirs is!

But this way you can understand the cleverness of their ideas and have a framework in which to think about the problem. When I start straight off to read someone else's solution I find it boring and uninteresting, with no way of putting the whole picture together.

At least, that's the way it works for me! Throughout the book, I will be suggesting problems for you to play with. If they're too hard, fine. Some of them are pretty difficult! But you might skip them thinking that, well, they've probably already been done by somebody else; so what's the point? Well, of course they've been done!

But so what? Do them for the ftm of it. That's how to learn the knack of doing things when you have to do them. Let me give you an example. Suppose I wanted to add up a series of numbers,. No doubt you know how to do it; but when you play with this sort of problem as a kid, and you haven't been shown the answer Then, as you go into adulthood, you develop a certain confidence that you can discover things; but if they've already been discovered, that shouldn't bother you at all.

What one fool can do, so can another, and the fact that some other fool beat you to it shouldn't disturb you. Most of the problems I give you in this book have been worked over many times, and many ingenious solutions have been devised for them.

But if you keep proving stuff that others have done, getting confidence, increasing the complexities of your solutions for the fun of it - then one day you'll turn around and discover that nobody actually did that one!

And that's the way to become a computer scientist. I'll give you an example of this from my own experience.


Above, I mentioned summing up the integers. Now, many years ago, I got interested in the generalization of such a problem: I wanted to figure out formulae for the sums of squares, cubes, and higher powers, trying to find the sum of m things each raised to the nth power.

And I cracked it, finding a whole lot of nice relations. When I'd finished, I had a formula for each sum in terms of a number, one for each n, that I couldn't find a formula for.

I wrote these numbers down, but I couldn't find a general rule for getting them. Very shocking! And fun. Anyway, I discovered later that these numbers had actually been discovered long before - so I had only made it to the early eighteenth century! They were called "Bernoulli Numbers". The formula for them is quite complicated, and not known in any simple sense. I had a "recursion relation" to get the next one from the one before, but I couldn't find a formula for an arbitrary one.
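For the curious, the standard recursion relation (under the usual modern convention; whether it is the same one Feynman found is not stated here) determines each Bernoulli number from the earlier ones. A small Python sketch:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers from the recursion
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

print(bernoulli(8))   # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30 (as Fractions)
```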

So I went through life like this, discovering next something that had first been discovered long before, then something else that was already known. But I get so much fun out of doing it that I figure there must be others out there who do too, so I am giving you these problems to enjoy yourselves with.

Of course, everyone enjoys themselves in different ways. I would just urge you not to be intimidated by them, nor put off by the fact that they've been done. You're unlikely to discover something new without a lot of practice on old stuff, but further, you should get a heck of a lot of fun out of working out funny relations and interesting things.

Also, if you read what the other fool did, you can appreciate how hard it was to do (or not!), what he was trying to do, what his problems were, and so forth.

So for all these reasons, I suggest you have a go.

Problem 1. Would you advise the management to hire two clerks to do the job quicker? If so, how would you use them, and could you speed up the calculation by a factor of two? Can you generalize your solution to K, or even 2K, clerks? What kinds of problem can be speeded up in this way, and what kinds apparently can not?

This single file clerk sits there all day long working away like a fiend, taking cards in and out of the store like mad. Ultimately, the speed of the whole machine is determined by the speed at which the clerk - that is, the central processor - can do these operations.

Let's see how we can maybe improve the machine's performance. Suppose we want to compare two n-bit numbers, where n is some large number; we want to see if they're the same. The easiest way for a single file clerk to do this would be to work through the numbers, comparing each digit in sequence. Obviously, this will take a total time proportional to n, the number of digits needing checking.

But suppose we can hire n file clerks, or 2n, or perhaps 3n. Now, it turns out that by increasing the number of file clerks we can get the comparison time down to be proportional to log2 n. Can you see how? See if you can figure out a way of adding two n-bit numbers in "log n" time.
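One way to picture the log2 n claim (a sketch only, not the book's own solution, and it leaves the harder addition problem untouched): give each of n clerks one pair of bits to compare at the same moment, then combine their yes/no verdicts in pairs, halving the list each round.

```python
def compare_in_rounds(x_bits, y_bits):
    """Compare two bit-lists of equal, power-of-two length n; returns
    (equal?, number of combining rounds = log2 n)."""
    verdicts = [a == b for a, b in zip(x_bits, y_bits)]   # n clerks, one parallel step
    rounds = 0
    while len(verdicts) > 1:
        verdicts = [a and b for a, b in zip(verdicts[0::2], verdicts[1::2])]
        rounds += 1                                        # one round of pairing up
    return verdicts[0], rounds

print(compare_in_rounds([1,0,1,1,0,1,0,0], [1,0,1,1,0,1,0,0]))   # (True, 3)
```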

This is more difficult because you have to worry about the carries!
