Thanks to Jack Tseng for teaching me a lot about graphics hardware, and even more about how much difference hard work can make. And, of course, thanks to Shay and Emily for their generous patience with my passion for writing and computers. This book is devoted to a topic near and dear to my heart: writing software that pushes PCs to the limit. Given run-of-the-mill software, PCs run like the 97-pound-weakling minicomputers they are.
Give them the proper care, however, and those ugly boxes are capable of miracles. The key is this: Only on microcomputers do you have the run of the whole machine, without layers of operating systems, drivers, and the like getting in the way. Is performance still an issue in this era of cheap 486 computers and super-fast Pentium computers?
You bet. Impossible, you say? My point is simply this: PCs can work wonders. Before we can create high-performance code, we must understand what high performance is. The objective (not always attained) in creating high-performance software is to make the software able to carry out its appointed tasks so rapidly that it responds instantaneously, as far as the user is concerned.
In other words, high-performance code should ideally run so fast that any further improvement in the code would be pointless. Notice that the above definition most emphatically does not say anything about making the software as fast as possible.
It also does not say anything about using assembly language, or an optimizing compiler, or, for that matter, a compiler at all. You do indeed need tools to build a house, but any of many sets of tools will do.
You also need a blueprint, an understanding of everything that goes into a house, and the ability to use the tools. Likewise, high-performance programming requires a clear understanding of the purpose of the software being built, an overall program design, algorithms for implementing particular tasks, an understanding of what the computer can do and of what all relevant software is doing—and solid programming skills, preferably using an optimizing compiler or assembly language.
The optimization at the end is just the finishing touch, however. In the early 1970s, as the first hand-held calculators were hitting the market, I knew a fellow named Irwin.
He was a good student, and was planning to be an engineer. Being an engineer back then meant knowing how to use a slide rule, and Irwin could jockey a slipstick with the best of them. In fact, he was so good that he challenged a fellow with a calculator to a duel—and won, becoming a local legend in the process.
When you get right down to it, though, Irwin was spitting into the wind. In a few short years his hard-earned slipstick skills would be worthless, and the entire discipline would be essentially wiped from the face of the earth.
Irwin had basically wasted the considerable effort and time he had spent optimizing his soon-to-be-obsolete skills. What does all this have to do with programming? Plenty: optimizing the wrong thing, however skillfully, is wasted effort. Making rules is easy; the hard part is figuring out how to apply them in the real world. Consider a simple example: a program that calculates the checksum of a specified file. In other words, the program will add each byte in the specified file in turn into a 16-bit value. How are we going to generate a checksum value for a specified file?
The logical approach is to get the file name, open the file, read the bytes out of the file, add them together, and print the result. Most of those actions are straightforward; the only tricky part lies in reading the bytes and adding them together. It would be convenient to load the entire file into memory and then sum the bytes in one loop, but there's no guarantee that any particular file will fit in available memory; the simplest alternative is to use C's read() function to fetch one byte at a time. Sounds good, eh? Listing 1.1 shows an implementation of this approach. The code is compact, easy to write, and functions perfectly—with one slight hitch: it's slow. Table 1.1 gives execution times for Listing 1.1 and the other implementations discussed in this chapter.
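The listing itself is not reproduced in this excerpt, but a rough sketch of the approach—assuming a DOS-era C compiler, with hypothetical variable names rather than the book's exact code—looks like this:

    /* Sketch of a Listing 1.1-style checksum: one read() call--and
       therefore one DOS call--per byte. Hypothetical names throughout. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <io.h>                 /* open/read/close on DOS compilers */

    int main(int argc, char *argv[])
    {
        int handle;
        unsigned char byte;
        unsigned int checksum = 0;  /* 16 bits on DOS-era compilers */

        if (argc != 2) {
            printf("usage: checksum filename\n");
            exit(1);
        }
        if ((handle = open(argv[1], O_RDONLY | O_BINARY)) == -1) {
            printf("can't open %s\n", argv[1]);
            exit(1);
        }
        while (read(handle, &byte, sizeof(byte)) > 0)  /* DOS call per byte */
            checksum += byte;
        printf("The checksum is %u\n", checksum);
        close(handle);
        return 0;
    }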
Listings 1.2 and 1.3 together form the C-plus-assembly equivalent of Listing 1.1. To drive home the point, the checksum loop in Listing 1.3 is written in tight, hand-optimized assembly. The assembly language implementation is indeed faster than any of the C versions, as shown in Table 1.1—but only modestly so, because the design is unchanged.
(Execution times throughout this chapter were measured on the WordPerfect thesaurus file, TH.WP, 362,293 bytes in size, as compiled in the small model with Borland and Microsoft compilers with optimization on ("opt") and off ("no opt").) The lesson is clear: Optimization makes code faster, but without proper design, optimization just creates fast slow code.
Well, then, how are we going to improve our design? Just why is Listing 1.1 so slow? In a word: overhead. The C library implements the read() function by calling DOS to read the desired number of bytes. I figured this out by watching the code execute with a debugger, but you can buy library source code from both Microsoft and Borland. That means that Listing 1.1 invokes DOS once for each and every byte in the file. For starters, DOS functions are invoked with interrupts, and interrupts are among the slowest instructions of the x86 family CPUs.
Then, DOS has to set up internally and branch to the desired function, expending more cycles in the process. Finally, DOS has to search its own buffers to see if the desired byte has already been read, read it from the disk if not, store the byte in the specified location, and return.
All of that takes a long time—far, far longer than the rest of the main loop in Listing 1.1. In short, Listing 1.1 spends essentially all of its time on DOS overhead rather than on checksumming. How can we speed up Listing 1.1? It should be clear that we must somehow avoid invoking DOS for every byte in the file, and that means reading more than one byte at a time, then buffering the data and parceling it out for examination one byte at a time. A sketch of that approach appears below.
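Here is a minimal sketch of the redesigned inner loop—again an illustration under the stated assumptions (hypothetical BUFFER_SIZE and function names), not the book's listing:

    /* Sketch of the buffered redesign: one DOS call per block,
       then a tight loop over bytes already in memory. */
    #define BUFFER_SIZE 0x8000      /* assumed 32K block size */

    unsigned int checksum_buffered(int handle)
    {
        static unsigned char buffer[BUFFER_SIZE];
        unsigned int checksum = 0;
        int count, i;

        while ((count = read(handle, buffer, BUFFER_SIZE)) > 0)
            for (i = 0; i < count; i++)   /* no DOS call in sight here */
                checksum += buffer[i];
        return checksum;
    }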
The results confirm our theories splendidly, and validate our new design. As shown in Table 1.1, the buffered implementation is dramatically faster than Listing 1.1. To the casual observer, read() and getc() would seem slightly different but pretty much interchangeable, and yet in this application the performance difference between the two is about the same as that between a 4.77 MHz 8088-based PC and a 386. Make sure you understand what really goes on when you insert a seemingly-innocuous function call into the time-critical portions of your code. In other words, know the territory! The last section contained a particularly interesting phrase: the time-critical portions of your code.
Spend your time improving the performance of the code inside heavily-used loops and in the portions of your programs that directly affect response time.
Let C do what it does well, and use assembly only when it makes a perceptible difference. Like read(), getc() calls DOS to read from the file; the speed improvement of the getc() version over Listing 1.1 comes from the buffering the C library performs internally, not from any inherent efficiency of getc() itself. Easier, yes, but not faster. Consider this: Every invocation of getc() involves pushing a parameter, executing a call to the C library function, getting the parameter (in the C library code), looking up information about the desired stream, unbuffering the next byte from the stream, and returning to the calling code.
That takes a considerable amount of time, especially by contrast with simply maintaining a pointer to a buffer and whizzing through the data in the buffer inside a single loop. There are four reasons that many programmers would give for not trying to improve on the getc() approach: the code is already fast enough; the code works, so why touch it; the C library is written in hand-optimized assembly anyway; and the C library conveniently handles the buffering of file data, so it would be a nuisance to have to implement that capability oneself.
The second reason is the hallmark of the mediocre programmer. Know when optimization matters—and then optimize when it does!
The third reason is often fallacious. C library functions are not always written in assembly, nor are they always particularly well-optimized. As an example, our special-purpose buffered code handily outperforms the general-purpose library routines it replaces. Clearly, you can do well by using special-purpose C code in place of a C library function—if you have a thorough understanding of how the C library function operates and exactly what your application needs done.
That brings us to the fourth reason: avoiding an internally buffered implementation because buffering the data yourself seems like a nuisance. The key is the concept of handling data in restartable blocks; that is, reading a chunk of data, operating on the data until it runs out, suspending the operation while more data is read in, and then continuing as though nothing had happened. A sketch of the idea follows.
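This sketch of the restartable-block pattern uses hypothetical names and reuses the BUFFER_SIZE definition from the earlier sketch; it shows the shape of the technique, not the book's listing:

    /* State that must survive between blocks lives in a context
       structure, so processing can stop at a block boundary and
       resume later as though nothing had happened. */
    typedef struct {
        unsigned int checksum;      /* running 16-bit sum */
    } SUM_CONTEXT;

    void sum_block(SUM_CONTEXT *ctx, const unsigned char *block, int count)
    {
        while (count--)             /* operate until the block runs out */
            ctx->checksum += *block++;
    }

    unsigned int checksum_file(int handle)
    {
        static unsigned char buffer[BUFFER_SIZE];
        SUM_CONTEXT ctx = { 0 };
        int count;

        while ((count = read(handle, buffer, BUFFER_SIZE)) > 0)
            sum_block(&ctx, buffer, count);   /* suspend/resume point */
        return ctx.checksum;
    }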
Handled this way, the buffering adds surprisingly little complexity. At any rate, always consider the alternatives; a bit of clever thinking and program redesign can go a long way. I have said time and again that optimization is pointless until the design is settled. When that time comes, however, optimization can indeed make a significant difference: as Table 1.1 shows, assembly optimization of the buffered design yields considerable further improvements, well worth pursuing—once the design has been maxed out.
Note that the times in Table 1.1 are limited by disk access; the execution times even of the fastest versions are dominated by the time required to read the file from disk. If a disk cache is enabled and the file to be checksummed is already in the cache, the assembly version is three times as fast as the C version. In other words, the inherent nature of this application limits the performance improvement that can be obtained via assembly.
What have we learned? Consider the ratios on the vertical axis of Table 1.1; they show that redesign bought us far more than instruction-level optimization alone did. Optimization is no panacea. This chapter has presented a quick step-by-step overview of the design process.
Create code however you want, but never forget that design matters more than detailed optimization. Certainly if you use assembly at all, make absolutely sure you use it right. The potential of assembly code to run slowly is poorly understood by a lot of people, but that potential is great, especially in the hands of the ignorant.
Some time ago, I was asked to work over a critical assembly subroutine in order to make it run as fast as possible. The task of the subroutine was to construct a nibble out of four bits read from different bytes, rotating and combining the bits so that they ultimately ended up neatly aligned in bits 3-0 of a single byte.
I examined the subroutine line by line, saving a cycle here and a cycle there, until the code truly seemed to be optimized. When I was done, every instruction in the key part of the code had been individually honed. Still, something bothered me, so I spent a bit of time going over the code again. Suddenly, the answer struck me—the code was rotating each bit into place separately, so that a multibit rotation was being performed every time through the loop, for a total of four separate time-consuming multibit rotations!
While the instructions themselves were individually optimized, the overall approach did not make the best possible use of the instructions. I restructured the loop to collect the bits first and align them afterward; this moved the costly multibit rotation out of the loop so that it was performed just once, rather than four times. While the code may not look much different from the original, and in fact still contains exactly the same number of instructions, the performance of the entire subroutine improved by about 10 percent from just this one change.
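The routine itself isn't reproduced here; the following MASM-style sketch is a hypothetical reconstruction of the general transformation—invented register assignments and all—rather than the actual code:

    ; Before: a multibit rotation executes on every pass.
            mov     dx,4            ; four bits to collect
    GetBit: lodsb                   ; fetch next source byte
            and     al,1            ; isolate the bit we want
            ror     al,cl           ; multibit rotation--every time through
                                    ; (CL aimed per pass; details omitted)
            or      bl,al           ; merge bit into the result
            dec     dx
            jnz     GetBit

    ; After: collect bits with cheap single-bit shifts; do any multibit
    ; alignment once, outside the loop.
            mov     dx,4
    GetBit2:lodsb
            and     al,1            ; isolate the bit we want
            shl     bl,1            ; single-bit shift makes room
            or      bl,al           ; bit lands in bit 0
            dec     dx
            jnz     GetBit2
            ror     bl,cl           ; one multibit rotation, performed once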
The point is this: To write truly superior assembly programs, you need to know what the various instructions do and which instructions execute fastest…and more. You must also learn to look at your programming problems from a variety of perspectives so that you can put those fast instructions to work in the most effective ways. Is it really so hard as all that to write good assembly code for the PC?
Thanks to the decidedly quirky nature of the x86 family CPUs, assembly language differs fundamentally from other languages, and is undeniably harder to work with. On the other hand, the potential of assembly code is much greater than that of other languages, as well.
To understand why this is so, consider how a program gets written. A programmer examines the requirements of an application, designs a solution at some level of abstraction, and then makes that design come alive in a code implementation. If not handled properly, the transformation that takes place between conception and implementation can reduce performance tremendously; for example, a programmer who implements a routine to search a list of 100,000 sorted items with a linear rather than binary search will end up with a disappointingly slow program (on average 50,000 comparisons per lookup, versus at most about 17 for a binary search).
The process of turning a design into executable code by way of a high-level language involves two transformations: one performed by the programmer to generate source code, and another performed by the compiler to turn source code into machine language instructions. Consequently, the machine language code generated by compilers is usually less than optimal given the requirements of the original design. High-level languages provide artificial environments that lend themselves relatively well to human programming skills, in order to ease the transition from design to implementation.
The price for this ease of implementation is a considerable loss of efficiency in transforming source code into machine language. This is particularly true given that the x86 family in real and 16-bit protected mode, with its specialized memory-addressing instructions and segmented memory architecture, does not lend itself particularly well to compiler design.
Even the 32-bit mode of the 386 and its successors, with its more powerful addressing modes, offers fewer registers than compilers would like. Assembly, on the other hand, is simply a human-oriented representation of machine language. As a result, assembly provides a difficult programming environment—the bare hardware and systems software of the computer—but properly constructed assembly programs suffer no transformation loss, as shown in Figure 2.2.
Assemblers perform no transformation from source code to machine language; instead, they merely map assembler instructions to machine language instructions on a one-to-one basis. The key, of course, is the programmer, since in assembly the programmer must essentially perform the transformation from the application specification to machine language entirely on his or her own.
The assembler merely handles the direct translation from assembly to machine language. The first part of assembly language optimization, then, is self-reliance. An assembler is nothing more than a tool to let you design machine-language programs without having to think in hexadecimal codes. So assembly language programmers—unlike all other programmers—must take full responsibility for the quality of their code.
Since assemblers provide little help at any level higher than the generation of machine language, the assembly programmer must be capable both of coding any programming construct directly and of controlling the PC at the lowest practical level—the operating system, the BIOS, even the hardware where necessary. High-level languages handle most of this transparently to the programmer, but in assembly everything is fair—and necessary—game, which brings us to another aspect of assembly optimization: knowledge.
In the PC world, you can never have enough knowledge, and every item you add to your store will make your programs better. Thorough familiarity with both the operating system APIs and BIOS interfaces is important; since those interfaces are well-documented and reasonably straightforward, my advice is to get a good book or two and bring yourself up to speed.
Similarly, familiarity with the PC hardware is required. While that topic covers a lot of ground—display adapters, keyboards, serial ports, printer ports, timer and DMA channels, memory organization, and more—most of the hardware is well-documented, and articles about programming major hardware components appear frequently in the literature, so this sort of knowledge can be acquired readily enough.
The single most critical aspect of the hardware, and the one about which it is hardest to learn, is the CPU. The x86 family CPUs have a complex, irregular instruction set, and, unlike most processors, they are neither straightforward nor well-documented with respect to true code performance. In fact, since most articles and books are written for inexperienced assembly programmers, there is very little information of any sort available about how to generate high-quality assembly code for the x86 family CPUs.
As a result, knowledge about programming them effectively is by far the hardest knowledge to gather. A good portion of this book is devoted to seeking out such knowledge.
Is the never-ending collection of information all there is to assembly optimization, then? Hardly. Knowledge is simply a necessary base on which to build. Basically, there are only two possible objectives to high-performance assembly programming: Given the requirements of the application, keep to a minimum either the number of processor cycles the program takes to run, or the number of bytes in the program, or some combination of both.
You will notice that my short list of objectives for high-performance assembly programming does not include traditional objectives such as easy maintenance and speed of development.
Those are indeed important considerations—to persons and companies that develop and distribute software. People who actually buy software, on the other hand, care only about how well that software performs, not how it was developed nor how it is maintained. Knowledge of the sort described earlier is absolutely essential to fulfilling either of the objectives of assembly programming; so is the ability to apply that knowledge creatively to the task at hand. Knowledge makes that possible, but your programming instincts make it happen. And it is that intuitive, on-the-fly integration of a program specification and a sea of facts about the PC that is the heart of Zen-class assembly optimization.
As with Zen of any sort, mastering that Zen of assembly language is more a matter of learning than of being taught. You will have to find your own path of learning, although I will start you on your way with this book. The subtle facts and examples I provide will help you gain the necessary experience, but you must continue the journey on your own. Each program you create will expand your programming horizons and increase the options available to you in meeting the next challenge.
The ability of your mind to find surprising new and better ways to craft superior code from a concept—the flexible mind, if you will—is the linchpin of good assembler code, and you will develop this skill only by doing.
Never underestimate the importance of the flexible mind. Good assembly code is better than good compiled code. High-level languages are the best choice for the majority of programmers, and for the bulk of the code of most applications.
When the best code—the fastest or smallest code possible—is needed, though, assembly is the only way to go. Simple logic dictates that no compiler can know as much about what a piece of code needs to do or adapt as well to those needs as the person who wrote the code. Given that superior information and adaptability, an assembly language programmer can generate better code than a compiler, all the more so given that compilers are constrained by the limitations of high-level languages and by the process of transformation from high-level to machine language.
Consequently, carefully optimized assembly is not just the language of choice but the only choice for the 1 percent to 10 percent of code—usually consisting of small, well-defined subroutines—that determines overall program performance, and it is the only choice for code that must be as compact as possible, as well. In the run-of-the-mill, non-time-critical portions of your programs, it makes no sense to waste time and effort on writing optimized assembly code; save that effort for the heavily used loops and the like, and in those areas where you need the finest code quality, accept no substitutes.
Note that I said that an assembly programmer can generate better code than a compiler, not will generate better code. While it is true that good assembly code is better than good compiled code, it is also true that bad assembly code is often much worse than bad compiled code; since the assembly programmer has so much control over the program, he or she has virtually unlimited opportunities to waste cycles and bytes.
The sword cuts both ways, and good assembly code requires more, not less, forethought and planning than good code written in a high-level language. The gist of all this is simply that good assembly programming is done in the context of a solid overall framework unique to each program, and the flexible mind is the key to creating that framework and holding it together.
To summarize, the skill of assembly language optimization is a combination of knowledge, perspective, and a way of thought that makes possible the genesis of absolutely the fastest or the smallest code. With that in mind, what should the first step be?
Development of the flexible mind is an obvious step. Still, the flexible mind is no better than the knowledge at its disposal. The first step in the journey toward mastering optimization at that exalted level, then, would seem to be learning how to learn. Consider the case of a published article I once read, in which the author stepped through the optimization of an assembly routine, totting up Intel's official cycle counts at every turn. The author had chosen a small, well-defined assembly language routine to refine, consisting of about 30 instructions that did nothing more than expand 8 bits to 16 bits by duplicating each bit. In short, he had used all the information at his disposal to improve his code, and had, as a result, saved cycles by the bushel.
There was, in fact, only one slight problem with the optimized version of the routine…. As diligent as the author had been, he had nonetheless committed a cardinal sin of x86 assembly language programming: He had assumed that the information available to him was both correct and complete.
While the execution times provided by Intel for its processors are indeed correct, they are incomplete; the other—and often more important—part of code performance is instruction fetch time, a topic to which I will return in later chapters. Assume nothing. I cannot emphasize this strongly enough—when you care about performance, do your best to improve the code and then measure the improvement.
Ignorance about true performance can be costly. When I wrote video games for a living, I spent days at a time trying to wring more performance from my graphics drivers. I rewrote whole sections of code just to save a few cycles, juggled registers, and relied heavily on blurry-fast register-to-register shifts and adds. As I was writing my last game, I discovered that the program ran perceptibly faster if I used look-up tables instead of shifts and adds for my calculations. In truth, instruction fetching was rearing its head again, as it often does, and the fetching of the shifts and adds was taking as much as four times the nominal execution time of those instructions.
Ignorance can also be responsible for considerable wasted effort. I once came across a published exchange of letters debating exactly how fast a snippet of code that accessed CGA display memory could run without producing snow on the screen. The letter-writers counted every cycle in their timing loops, just as the author in the story that started this chapter had. Like that author, the letter-writers had failed to take the prefetch queue into account. In fact, they had neglected the effects of video wait states as well, so the code they discussed was actually much slower than their estimates.
The proper test would, of course, have been to run the code to see if snow resulted, since the only true measure of code performance is observing it in action.
Clearly, one key to mastering Zen-class optimization is a tool with which to measure code performance. The Zen timer presented in this chapter is just such a tool: it can be started at the beginning of a block of code of interest and stopped at the end of that code, with the resulting count indicating how long the code took to execute with an accuracy of about 1 microsecond.
To be precise, the timer counts once every 838.1 nanoseconds. (A nanosecond is one billionth of a second, and is abbreviated ns.) Listing 3.1 shows the Zen timer, and it rewards study. On the other hand, it is by no means essential that you understand exactly how the Zen timer works. Interesting, yes; essential, no. ZTimerOn is called at the start of a segment of code to be timed. ZTimerOn saves the context of the calling code, disables interrupts, sets timer 0 of the 8253 to mode 2 (divide-by-N mode), sets the initial timer count to 0, restores the context of the calling code, and returns.
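The heart of that sequence is just a few port writes. Here is a minimal sketch (not the book's listing) of programming 8253 timer 0 for mode 2 with a full-range count; port 43h is the 8253's mode/command register and port 40h is timer 0's data port:

            cli                     ; interrupts stay off for the interval
            mov     al,00110100b    ; timer 0, write LSB then MSB, mode 2, binary
            out     43h,al          ; control word to the 8253
            sub     al,al
            out     40h,al          ; initial count LSB = 0
            out     40h,al          ; initial count MSB = 0 (65,536 effective)
                                    ; (ZTimerOff re-enables interrupts later)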
Two aspects of ZTimerOn are worth discussing further. One point of interest is that ZTimerOn disables interrupts. Were interrupts not disabled by ZTimerOn , keyboard, mouse, timer, and other interrupts could occur during the timing interval, and the time required to service those interrupts would incorrectly and erratically appear to be part of the execution time of the code being measured. As a result, code timed with the Zen timer should not expect any hardware interrupts to occur during the interval between any call to ZTimerOn and the corresponding call to ZTimerOff , and should not enable interrupts during that time.
A second interesting point about ZTimerOn is that it may introduce some small inaccuracy into the system clock time whenever it is called. The 8253 actually contains three timers, as shown in Figure 3.1. Each of the three timers counts down in a programmable way, generating a signal on its output pin when it counts down to 0.
Timer 2 drives the speaker, although it can be used for other timing purposes when the speaker is not in use. The output of timer 2, on the other hand, is connected to nothing other than the speaker, as shown in Figure 3.1. Timer 1 is dedicated to providing dynamic RAM refresh, and should not be tampered with lest system crashes result. Finally, timer 0 is used to drive the system clock, ticking once every 54.925 milliseconds. (A millisecond is one-thousandth of a second, and is abbreviated ms.) Timer 0's output line is connected to the hardware interrupt 0 (IRQ0) line on the system board, so every 54.925 ms—about 18.2 times per second—timer 0 generates an interrupt, which the BIOS uses to maintain the time of day. Each timer channel of the 8253 can operate in any of six modes.
Timer 0 normally operates in mode 3: square wave mode. In square wave mode, the initial count is counted down two at a time; when the count reaches zero, the output state is changed. The initial count is again counted down two at a time, and the output state is toggled back when the count reaches zero. The result is a square wave that changes state more slowly than the input clock by a factor of the initial count.
In its normal mode of operation, timer 0 generates an output pulse that is low for about 27.5 ms and high for about 27.5 ms. Square wave mode is not very useful for precision timing because it counts down by two twice per timer interrupt, thereby rendering exact timings impossible.
Fortunately, the offers another timer mode, mode 2 divide-by-N mode , which is both a good substitute for square wave mode and a perfect mode for precision timing. Divide-by-N mode counts down by one from the initial count.
When the count reaches zero, the timer turns over and starts counting down again without stopping, and a pulse is generated for a single clock period. As a result, timer 0 continues to generate timer interrupts in divide-by-N mode, and the system clock continues to maintain good time.
Why not use timer 2 instead of timer 0 for precision timing? We need the interrupt generated by the output of timer 0 to tell us when the count has overflowed, and we will see shortly that the timer interrupt also makes it possible to time much longer periods than the Zen timer shown in Listing 3.1 can handle by itself. In fact, the Zen timer shown in Listing 3.1 can only time periods of up to about 54 ms.
Fifty-four ms may not seem like a very long time, but even a CPU as slow as the 8088 can perform more than 1,000 divides in 54 ms, and division is the single instruction that the 8088 performs most slowly. If a measured period turns out to be longer than 54 ms (that is, if timer 0 has counted down and turned over), the Zen timer will display a message to that effect. A long-period Zen timer for use in such cases will be presented later in this chapter.
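For the record, the 54 ms limit falls straight out of the timer arithmetic: an initial count of 0 behaves as a full 65,536 counts, and 65,536 counts × 838.1 ns per count comes to roughly 54.9 ms.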
The Zen timer determines whether timer 0 has turned over by checking to see whether an IRQ0 interrupt is pending. Remember, interrupts are off while the Zen timer runs, so the timer interrupt cannot be recognized until the Zen timer stops and enables interrupts. If an IRQ0 interrupt is pending, then timer 0 has turned over and generated a timer interrupt.
Recall that ZTimerOn initially sets timer 0 to 0, in order to allow for the longest possible period—about 54 ms—before timer 0 reaches 0 and generates the timer interrupt. Since timer 0 is initially set to 0 by the Zen timer, and since the system clock ticks only when timer 0 counts off its full 54.925 ms, any partial count in progress when ZTimerOn takes over is lost, costing the system clock up to 54.925 ms. In addition, a timer interrupt is generated when timer 0 is switched from mode 3 to mode 2, advancing the system clock by up to 54.925 ms. Finally, up to 54.925 ms can again be lost when ZTimerOff resets the count. Net result: The system clock may end up running as much as about 110 ms (about a ninth of a second) slow each time the Zen timer is used.
Potentially far greater inaccuracy can be incurred by timing code that takes longer than about 54 ms to execute. Recall that all interrupts, including the timer interrupt, are disabled while timing code with the Zen timer. The interrupt controller is capable of remembering at most one pending timer interrupt, so all timer interrupts after the first one during any given Zen timing interval are ignored. Consequently, if a timing interval exceeds 54.9 ms, the system clock falls behind by however much longer than that the interval runs. (Systems that have battery-backed clocks—AT-style machines; that is, virtually all machines in common use—automatically reset the correct time whenever the computer is booted, and systems without battery-backed clocks prompt for the correct date and time when booted.)
Also, repeated use of the Zen timer usually makes the system clock slow by at most a total of a few seconds, unless code that takes much longer than 54 ms to run is timed (in which case the Zen timer will notify you that the code is too long to time).
ZTimerOff saves the context of the calling program, latches and reads the timer 0 count, converts that count from the countdown value that the timer maintains to the number of counts elapsed since ZTimerOn was called, and stores the result.
Immediately after latching the timer 0 count—and before enabling interrupts— ZTimerOff checks the interrupt controller to see if there is a pending timer interrupt, setting a flag to mark that the timer overflowed if there is indeed a pending timer interrupt. After that, ZTimerOff executes just the overhead code of ZTimerOn and ZTimerOff 16 times, and averages and saves the results in order to determine how many of the counts in the timing result just obtained were incurred by the overhead of the Zen timer rather than by the code being timed.
Finally, ZTimerOff restores the context of the calling program, including the state of the interrupt flag that was in effect when ZTimerOn was called to start timing, and returns. One interesting aspect of ZTimerOff is the manner in which timer 0 is stopped in order to read the timer count.
We simply tell the 8253 to latch the current count, and the 8253 does so without breaking stride. ZTimerReport first checks to see whether the timer overflowed (counted down to 0 and turned over) before ZTimerOff was called; if overflow did occur, ZTimerReport prints a message to that effect and returns. Otherwise, ZTimerReport subtracts the reference count (representing the overhead of the Zen timer) from the count measured between the calls to ZTimerOn and ZTimerOff, converts the result from timer counts to microseconds, and prints the resulting time in microseconds to the standard output.
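The latch-and-read sequence ZTimerOff relies on is similarly brief; a sketch (again, not the exact listing) of reading out timer 0 and converting the countdown to elapsed counts:

            mov     al,00000000b    ; counter 0, latch command
            out     43h,al          ; 8253 freezes a snapshot of the count
            in      al,40h          ; read latched LSB
            mov     ah,al
            in      al,40h          ; read latched MSB
            xchg    ah,al           ; AX = remaining countdown value
            neg     ax              ; elapsed counts = 65,536 - remainder
                                    ; microseconds = elapsed counts x 0.8381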
The standard output is not always a convenient destination for the result—when the code being timed runs in graphics mode, for instance. There are many ways to deal with this. One approach is modification of ZTimerReport to place the result at some safe location in memory, such as an unused portion of the BIOS data area. Another approach is alteration of ZTimerReport to print the result over a serial port to a terminal or to another PC acting as a terminal. Similarly, many debuggers can be run from a remote terminal via a serial link. A final approach is to modify ZTimerReport to print the result to the auxiliary output via DOS function 4, and to then write and load a special device driver named AUX, to which DOS function 4 output would automatically be directed.
This device driver could send the result anywhere you might desire. The result might go to the secondary display adapter, over a serial port, or to the printer, or could simply be stored in a buffer within the driver, to be dumped at a later time.
Credit for this final approach goes to Michael Geary, and thanks go to David Miller for passing the idea on to me. Go to it! The Zen timer subroutines are designed to be near-called from assembly language code running in the public segment Code.
The Zen timer subroutines can, however, be called from any assembly or high-level language code that generates OBJ files that are compatible with the Microsoft linker, simply by modifying the segment that the timer code runs in to match the segment used by the code being timed, or by changing the Zen timer routines to far procedures and making far calls to the Zen timer code from the code being timed, as discussed at the end of this chapter.
All three subroutines preserve all registers and all flags except the interrupt flag, so calls to these routines are transparent to the calling code. If you do change the Zen timer routines to far procedures in order to call them from code running in another segment, be sure to make all the Zen timer routines far, including ReferenceZTimerOn and ReferenceZTimerOff.
Please be aware that the inaccuracy that the Zen timer can introduce into the system clock time does not affect the accuracy of the performance measurements reported by the Zen timer itself.
On the other hand, there is certainly no guarantee that code performance as measured by the Zen timer will be the same on compatible computers as on genuine IBM machines, or that either absolute or relative code performance will be similar even on different IBM models; in fact, quite the opposite is true. When I have timed the same code on different models and compatibles, the measurements have rarely matched exactly. The differences were minor, mind you, but my experience illustrates the risk of assuming that a specific make of computer will perform in a certain way without actually checking.
Not that this variation between models makes the Zen timer one whit less useful—quite the contrary. The Zen timer is an excellent tool for evaluating code performance over the entire spectrum of PC-compatible computers. Listing 3.2 shows a sample code fragment to be timed; this listing measures the time required to execute 1,000 loads of AL from the memory variable MemVar. Note that Listing 3.2 is a fragment, not a complete program.
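Since the listing itself isn't reproduced in this excerpt, here is a sketch of what such a fragment looks like—using MASM's REPT directive to replicate the instruction, with the routine names taken from the surrounding text:

            call    ZTimerOn        ; start the precision Zen timer
            rept    1000            ; assemble 1,000 copies of the MOV
            mov     al,[MemVar]
            endm
            call    ZTimerOff       ; stop timing; ZTimerReport prints result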
When Listing 3.2 is timed, it is inserted into a test-bed program that is assembled and linked with the Zen timer of Listing 3.1; this approach lets us avoid reproducing Listing 3.1 in its entirety for every code fragment we time. Note that only after the initial jump is performed in the test-bed program is the code to be timed executed. The batch file PZTIME.BAT (Listing 3.4) assembles and links the test-bed file PZTEST.ASM (Listing 3.3), which pulls in the code to be timed from a separate file. Assuming that Listing 3.3 has been saved as PZTEST.ASM and Listing 3.4 as PZTIME.BAT, the code in Listing 3.2 can be assembled, linked, and run with a single command naming the file that holds it. When that command is executed on an original 4.77 MHz, 8088-based IBM PC, the Zen timer reports a time of just over 3 microseconds per load of AL from memory—considerably longer than the officially documented execution time of the MOV instruction. Exactly why that is so is just what this book is all about.
In order to perform any of the timing tests in this book, enter Listing 3.1 as PZTIMER.ASM, enter Listing 3.3 as PZTEST.ASM, and enter Listing 3.4 as PZTIME.BAT. Then simply enter the listing you wish to run into the file <filename> and enter the command:

        pztime <filename>

Code fragments you write yourself can be timed in just the same way.