Borrow a friend’s Mac and connect your computer to it with a FireWire cable. Boot your Mac into Target Disk Mode by holding down the T key as it starts up. If your files appear on your friend’s computer, you are in luck: in this case the hard drive itself is still fine, and the operating system simply needs to be reinstalled.
Mac hard drive recovery is also possible with good recovery software tailored to Mac OS. Some packages can be bought in stores and some are downloadable, and there are free downloadable recovery products from reputable vendors as well. These products generally work the same way: install the software, select the source (the defective drive), and choose a destination folder where your retrieved data can be safely stored.
One option for Mac hard drive recovery is Disk Utility. Insert the Mac OS install DVD into the optical drive and hold the Alt/Option key while you power up the Mac. From the top menu, open Disk Utility, then under First Aid click Verify Disk and then Repair Disk. When things are really out of hand, it’s time to let your ailing hard drive see a specialist. Costly as it may be, if your files are a treasured possession, you’ll be willing to bear the expense.
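For the Terminal-inclined, the same First Aid checks can be scripted with macOS’s `diskutil` command-line tool. The Python sketch below only builds the verify and repair command lines for a hypothetical disk identifier ("disk0"); it deliberately does not execute them, since repairing the wrong disk would be destructive.

```python
# Build (but do not run) the diskutil First Aid commands for a disk.
# "disk0" is a hypothetical identifier; check yours with `diskutil list`.

def first_aid_commands(disk_id):
    """Return the verify and repair command lines for a disk."""
    return [
        ["diskutil", "verifyDisk", disk_id],
        ["diskutil", "repairDisk", disk_id],
    ]

for cmd in first_aid_commands("disk0"):
    print(" ".join(cmd))
```

To actually run these on the Mac in question, you would pass each list to `subprocess.run`.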
Now, thanks to the existence of Mac hard drive repair services, you need not worry about losing the valuable data on your hard disk. All you have to do is visit a specialized shop to find out about high-quality data recovery providers that offer express recovery, Mac hard drive recovery and similar services. Such a contact can prove very useful for companies that run a lot of computers.
In my opinion, investing in high-quality hard disks is the best thing you can do to avoid losing your valuable data when a hard disk is damaged. However, if you want to learn more about the advantages and disadvantages of specific hard disk models, do some research on the internet. You will find hundreds of articles on specialized websites about this topic. A lot of people are talking about it right now!
Hard drives store all the data in your computer system, so when your hard drive breaks, you will definitely have a bad day, and finding out how to fix it becomes your main concern. Experts often suggest that the next best thing to do is to throw away a broken hard drive and replace it with a new one. However, most people do not like the idea of immediately discarding something valuable to them, which is why many prefer to find ways to repair the hard drive.
The first step when fixing your hard drive is to make sure that the drive really is dead. Check your computer’s BIOS settings to see whether the drive is still detected. Wires, ports and other connectors must also be checked carefully. Beyond that, make sure you understand all the details you need to know about hard drive repair before proceeding.
Hold the drive in one hand and carefully rotate it back and forth, listening for any noise as you do. If you cannot hear any noise, the drive may have jammed; some jammed drives also become very hot to the touch. A rhythmic click while rotating indicates that the drive is not jammed. If you have tried all the steps for successful hard drive data recovery and still see no results, run the chkdsk utility for further evaluation.
Looking At Hard Drive Failures
Some people like to do things themselves, but there are some problems that cannot be fixed by just one’s own hands, and a great example of this is hard drive failure. Hard drive repair should be left to the hands and skills of experts, especially if you want to recover your files.
Hard drives fail for many reasons, and laptop and external hard drives are more prone to breakdowns than desktop drives. Because both are portable and can be carried anywhere, they are constantly subjected to sudden movements. Excessive motion can cause the sensitive components of a laptop hard drive to bump against each other, rendering data unreadable; in fact, this is the most common cause of laptop hard drive failure. Another reason the hard drive may be faulty could be accidents like dropping the laptop or external hard drive, or spilling water on it. Improper handling or uneven and unstable surfaces can cause the laptop to fall to the ground, and the impact of such a fall can damage the sensitive components of the laptop’s hard drive. Of course, the length of service your computer has rendered could also be a cause: like any machine, hard drives suffer from wear and tear. Abusive use, such as leaving the computer running overnight or allowing your laptop to overheat, can also cause considerable damage to the computer’s hard drive.
One of the best things about laptop computers is that they are portable. Because of their small size, a laptop can be easily carried everywhere. People can use it wherever they are, whether they are in the office, in a restaurant or at the park. Its mobility allows it to be of great use to people who want to maximize their time. But it is also its portability feature which makes it more prone to damage. Since a laptop carries the miniature versions of desktop computer parts, it is more sensitive. If you accidentally drop your knapsack which has your laptop inside it, there is a possibility that its hard drive may be damaged and this could lead to data loss. Data loss can have severe personal repercussions, so people go on a quest for laptop data recovery which can sometimes bring them to companies like Hard Drive Recovery Associates.
Once you have ascertained that your laptop is busted, your next move is to make sure that your files can be extracted. You can retrieve the data on your own, or you can bring your laptop to the nearest repair center and have the technician retrieve the data for you. The first option is a viable one for people who are knowledgeable about computers, but for those who are insecure about their computer skills, it is better to opt for the second choice. There is a growing trend among computer users today, even those with mediocre skills, to use data recovery software to retrieve laptop data. Data recovery software is undoubtedly a great tool for data retrieval, but it must also be used with caution as it is possible that in your attempt for laptop data recovery, you can overwrite the data on your hard drive and lose the files.
Laptops lose data for many reasons. A common data loss reason is human error, like when you accidentally delete a file. It could also be due to a virus that attached itself to your computer system, corrupting your files and making your hard drive crash. You can also lose data due to hard drive problems like logical failures and head crashes. Whatever the reason for data loss, losing files greatly impacts computer users, especially if the file lost is an important one. That is why laptop data recovery is not an option, it is a must.
A great option for laptop data recovery is professional data recovery software that can be purchased online. Many such recovery tools are available, but they are all designed to access the information on your hard drive, sift through the data, recover any files that have not been fully corrupted or overwritten, and restore the files in a readable form. There are basically two kinds of recovery software products that work for laptops. The first kind can be used without dismantling your laptop and is loaded into the computer using the laptop’s CD/DVD drive; this is a good option for those who want to keep their laptop intact. The second type is used on a hard drive that has been removed from the laptop, which means partially disassembling your laptop, a risky proposition if you do not have the tools needed to open it in the first place.
RAID data recovery is not very easy to perform as it needs a high level of RAID technical know-how and prior experience in handling RAID data drives. Missteps in RAID recovery might bring about a permanent loss of data, which can be very ugly in a business setting. Parallel concerns should be taken in restoring the function of the storage equipment and protecting the data in the device. RAID failures can only be treated effectively by an expert data recovery service that routinely solves RAID recovery problems.
About RAID Recovery Software
The world has become highly technically advanced, and data storage has found a new form known as RAID, in which many disk drives are combined into a single logical unit. RAID recovery has become important, as RAID arrays are sometimes corrupted and need to be fixed. Recovering a corrupted RAID array is not a trivial process and takes time and effort, which nowadays is eased by the RAID recovery software available in the IT market. The most significant feature of such software is that it allows you to recover and save the data and files to another hard disk or storage device before fixing the corrupted array.
The RAID recovery process begins by identifying the type of the RAID array so that it can be restored to proper function. Corrupted arrays generally need the assistance of RAID recovery software to sort out the array members and fix them so they function properly. Almost any type of RAID array can be recognized and reconfigured using these recovery tools, including arrays connected to dedicated RAID controllers and RAID-enabled motherboards, which can likewise be reconstructed with RAID recovery software.
Failure of a Redundant Array of Independent Disks, or RAID, can be frustrating enough to lead you to act on your own to solve the problem. Though you may consider yourself tech-savvy, it still pays to be careful when addressing RAID drive problems or any kind of RAID 5 recovery. There are some things to keep in mind and things you should not do when it comes to repair and recovery. For starters, do not pursue the recovery yourself if you know that even one disk drive has failed, and multiple disk failures should definitely lead you to seek professional RAID help. It is also not recommended that you rebuild the RAID unless you are completely sure that all the member hard disks are present, healthy and accounted for.
Rebuilding a RAID disk is also not recommended if the controller does not perform well or did not pass its tests; do not dismiss any warning signs of irregularity. When you are working on the RAID components, never take out more than one disk at a time from its location, so that you can stay on top of the sequencing. Steering clear of these common mistakes can help you prevent further problems.
If you are faced with a RAID disk failure, do not spend all your energy worrying or telling friends how frustrated you are with the situation. Just remember that, like most other things, a RAID problem has solutions as well. One solution you can try when it comes to RAID recovery is RAID utility software. If you plan to go this route, shop for the software and download it to your computer. Here are some general steps for using the software for RAID recovery.
Keep in mind that each specific, modern software package will have its own rules. The first thing to do is launch the RAID recovery software and select the module that matches the reason for the data loss; for example, choose ‘Deleted File Recovery’ for deleted files. From here, just select the files that should be recovered. Once done, select the volume where the data is located and choose ‘NEXT’. You will then see the volumes and the potential locations of the lost files. You may be asked to select the volumes you want to recover and click ‘NEXT’ again. You can also preview the files first before you click and save the recovered data to a partition.
Losing necessary data is one of the most tragic occurrences a business can suffer. Not only will operations be halted; profits will also have to wait to be reaped. It is a big no for businesses to be lax in their computer servicing and maintenance. Of course, the best way to avoid RAID 5 recovery costs is to schedule a monthly or even daily data back-up; daily back-ups are more appropriate if your network of computers is large. Just imagine the cost and losses the business would incur if you lost all the data in your system.
RAID 5 recovery is difficult. In one reported case, a hospital stopped operating because its system failed. It took IT experts three weeks to recover all the images on each of the disk drives forming the array of inexpensive disks, and the experts themselves confessed that they had an extremely hard time recovering the images and decoding them to restore all the lost data. According to the IT expert who unlocked the RAID 5 mystery at that hospital, it is still best to back up your data to avoid RAID 5 recovery problems in the first place.
The very basic and honest answer to this question is that RAID 5 recovery is never easy; even seasoned IT experts confess that it is in no way a joke. RAID stands for redundant array of independent disks (originally, redundant array of inexpensive disks). A minimum of three hard drives is needed to form RAID 5. The equivalent of one drive’s capacity is allocated to parity data, which is distributed across all the drives and used to regenerate the contents of any single drive that fails.
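The parity idea itself is simple enough to sketch: within one stripe, the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the survivors together. A minimal illustration, using a hypothetical three-disk array and tiny four-byte blocks:

```python
# Hypothetical 3-disk RAID 5 stripe: two data blocks plus one parity
# block, where parity is the byte-wise XOR of the data blocks.

def xor_blocks(*blocks):
    """XOR equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data1 = b"\x01\x02\x03\x04"        # block on drive 1
data2 = b"\x10\x20\x30\x40"        # block on drive 2
parity = xor_blocks(data1, data2)  # block on drive 3

# Drive 2 fails: XOR the surviving blocks to regenerate its data.
rebuilt = xor_blocks(data1, parity)
print(rebuilt == data2)  # True
```

The hard part of real recovery is not this arithmetic but working out which physical block plays which role on which drive.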
In RAID 5 recovery procedures, it is beneficial to image the storage space on each device independently. The resulting drive “images” are then used to facilitate the rebuilding of the original array formation and, likewise, to restore the necessary records. Even the terms and processes that would aid a layman in the recovery sound like gibberish, which only proves that thorough technical know-how is of paramount importance. It is also imperative to have a substantial background in data storage mechanisms and in-depth software and hardware troubleshooting skills.
Avoiding RAID 5 recovery is better than having to go through it. You save yourself the money and the effort by setting up scheduled system back-ups.
Before a RAID 5 recovery, one has to determine the RAID 5 configuration parameters, because the configuration is what drives the process of finding the lost data. These parameters include the number of disks in the configuration, the sequence of the disks within the array, the block size that was used, the parity pattern that was used, and, in the case of RAID 5E and RAID 5EE, the location of the spare blocks.
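As a sketch of why those parameters matter, here is how a logical block maps to a physical disk and stripe once the disk count and parity rotation are known. This assumes the common “left-symmetric” RAID 5 layout; real controllers use several variants, which is exactly why the pattern has to be determined before any recovery attempt.

```python
# Map a logical block number to (disk, stripe) under a left-symmetric
# RAID 5 layout: parity rotates backwards one disk per stripe, and
# data blocks continue just after the parity disk, wrapping around.

def locate_block(logical_block, num_disks):
    """Return (disk_index, stripe_index) holding a logical block."""
    data_disks = num_disks - 1
    stripe = logical_block // data_disks
    parity_disk = (num_disks - 1 - stripe) % num_disks
    offset = logical_block % data_disks
    disk = (parity_disk + 1 + offset) % num_disks
    return disk, stripe

# With 4 disks, stripe 0 keeps parity on disk 3, so logical blocks
# 0, 1, 2 land on disks 0, 1, 2; stripe 1 rotates parity to disk 2.
for block in range(6):
    print(block, locate_block(block, 4))
```

Get any one of these parameters wrong and every reconstructed stripe comes out as garbage, which is the practical reason for determining them first.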
Every job needs an expert in the field, so it is advisable to make sure you get the best and the right person to tackle the RAID 5 recovery, or at least have good information to start with. Basically, you can search the web for the best IT consultants, or you can ask a friend or anyone who has been through such a situation.
Parameter recovery can be done manually or automatically. The manual route usually gets technical and demands a lot of thought and time, so be aware that it could take longer than expected. A RAID 5 recovery costs about the same as a recovery on any other RAID server configuration would.
Questions abound in the enterprise server industry. For instance, if you had to stake your business on PC-based technology or go the way of larger, legacy systems, what would you do?
Some vendors in the enterprise server arena plan to sway users into moving to PC-based enterprise servers. The mainframe era is dead, they say.
Opponents argue that the mainframe is the only machine that can do the job. Anything else presents too many management headaches. You can probably guess which side Intel Corp. is on.
The king of desktop PCs is hoping it can maneuver into the server business in a big way. Key to Intel’s strategy is its recently introduced Pentium Pro microprocessor.
Optimized to take advantage of true 32-bit software, the chip is the vehicle with which Intel plans to catapult itself into the server market, with Microsoft Corp.’s Windows NT by its side. Vendors of Intel-based products speculate that the chances of Intel succeeding in its enterprise strategy are quite good.
“Intel obviously has its act together and it sees in the long term that if it wants to continue to have the presence on the desktop, it also needs to have a presence at the enterprise (level) as well,” says Lary Evans, senior vice president and general manager, the server group, at Dell Computer Corp., based in Austin, Texas.
“Over the last couple of years Intel has made substantial investments on being a serious player in the enterprise market,” he says.
Robert Lorentz, a server specialist with NEC Technologies Inc., based in Mississauga, Ont., agrees with Evans in the sense that companies want to move away from large, expensive systems. He says these companies want to migrate to smaller, but more powerful PC-based servers. It’s where Intel comes into play that he offers a differing opinion.
“You have to ask yourself why Microsoft wanted a dual strategy,” Lorentz says. “When you look at the Microsoft CDs, you (use that software on) Intel or RISC.”
Not coincidentally, NEC Technologies is the world’s largest manufacturer of MIPS microprocessors designed on a RISC architecture from MIPS Technologies Inc., a subsidiary of Silicon Graphics Inc.
Intel may make inroads into the enterprise server segment, but only to a point, one consultant says. Because of the high-end technologies required for RAID array-based servers, he says, it is necessary to team up with an experienced RAID recovery company like HDRA to best limit the impact of hard disk failures. RAID failures are estimated to cost North American business over $300 million per year.
“I think Intel works quite fine in the server environment up to a certain level,” says Don Thompson, a senior manager with the Deloitte & Touche Consulting Group in Toronto. “But past that, if you’re really looking for a lot of performance … you are probably looking to a more RISC-like architecture.”
Despite the performance RISC offers, the PowerPC family of processors — based on IBM’s Power architecture, an acronym for Performance Optimized with Enhanced RISC — may not play a major role in the enterprise server domain, Thompson says.
“I think as fast as they (the PowerPC alliance) move, Intel will still be with them. Because Intel has so much market share the PowerPC is certainly not going to make a huge penetration.”
He points out, however, that competition from the PowerPC makes Intel’s production schedule that much more aggressive.
As for using PCs as servers, an IBM Canada Ltd. manager says users should be careful.
Norbert Dawalibi, general manager of large scale computing at IBM Canada, says trying to manage a large collection of PC-based servers would be strenuous for even the most enthusiastic MIS person.
“What people don’t realize is that the real cost in IT is in managing and people,” Dawalibi says. “If you have more than five or 10 PC servers, it becomes a lot more expensive to manage.”
According to the consulting firm Gartner Group, based in Stamford, Conn., the server end of the client-server model is taking over. A Gartner Group research note explains: “As enterprises deploy larger client-server applications, they are finding that management issues … are becoming more difficult.”
Although this may sound more like a software management issue, the effect it will have on server hardware is considerable.
Gartner Group points out that these management issues are attributable to the “fat client” which puts much of the processing logic, including presentation work, business rules and even data input and output logic, on the client PC.
Second-generation client-server applications are going to use what Gartner calls the “thin client” or the “fat server” model, which implies that more of a load is placed on the server and the PC’s workload is reduced.
“What people don’t realize is that if I have 5,000 PCs out there and I start putting all of the logic in those PCs, it’s very difficult if not impossible to manage,” Dawalibi says.
“Simple things like installing a new version of software or putting a correction on the software that’s already there becomes a nightmare. Whereas if you have a fat server model, you just do it.”
That model is expected to have a profound impact on enterprise servers and hardware design, Dawalibi says.
“Lots of people started client-server applications using PCs. What you realize when you do that is that a very robust server is needed in the back to handle those applications.”
Data is the most important aspect any company has, Dawalibi says, and “that’s not going to go away. You need to house it in some sort of data repository. Typically, that’s where the mainframes and the large minis are going to sit. Really, you are looking at putting another server down from that which is really a small engine itself or a replication server, database engine, or communications server. The mainframe is simply going to be the vault.”
[Image caption: RAID servers have replaced old style mainframes.]

Enterprise servers are moving to a more streamlined design, coming in smaller boxes and using more powerful processors. But that doesn’t mean that the larger mainframe should be discounted altogether, Dawalibi says.
“It’s coming back as the enterprise server for larger enterprises. Clearly it’s not the answer for small companies but if you’re a government, insurance company, bank, or large manufacturer, you will find that there’s nothing else that can do the job.”
Dell’s Evans disagrees. “Over time the mainframe is dying, it just doesn’t know it yet,” he says. “We are a couple of years away, but over time I do (see that happening).”
One method for achieving higher performance with PCs is to cluster them together. This technology promises to bring enterprise servers to an even higher level of performance. Evans says this will be the year when clustering finally matures.
“The reason I say that is because that technology is going to be driven by the software, not by the hardware. That’s the time frame in which I think Microsoft will have its first release of clustering software,” he says. “And that’s really what is going to make it happen.” Relying on Microsoft to “make it happen” might be a correct assumption if operating system trends are any indication.
Low to mid-range enterprise servers are frequently using Windows NT as the choice operating environment, says Thompson. “You could look at it as Unix being pretty dominant on the enterprise level, in fact the mainframe is still there to some extent,” he says.
Nobody anticipates a hard drive problem, and if it results in a huge data loss it will be a killer moment, for sure. The best thing to do when you have some kind of major hard drive problem is simply to relax, and not worry too heavily that you will not be able to retrieve the data from your broken hard drive. It’s always key to remember that a professional hard drive recovery company will almost always be able to retrieve the data from your drive, and it is often possible that data recovery software can do it quicker and cheaper than you might have thought.
Hard Drive Status Issues
Sometimes hard drives do not get detected. When the computer system doesn’t discover the particular disk drive you expected to find, or the system encounters booting difficulties, you may have a bad disk drive, or the problem could be your motherboard’s SATA ports. PCBs tend to break down over time because of the heat generated by the disk drive itself. The hard disk may still be detected by the Microsoft Windows operating system and may seem available even though the data is inaccessible.
You may also receive a message indicating that your broken-down drive has to be formatted. In cases like this, you most likely have a corrupted file system. Formatting is definitely not a good choice when you have this kind of problem, because it will simply delete all of your data before you even have the opportunity to recover it.
Types of Hard Drive Problems
Hard drive restoration performed by a professional data recovery service company is not always necessary, but it frequently is. Hard drive problems can be classified into two forms:
Logical problems or failures: These complications arise when the file system becomes corrupted. The disk drive itself is fine, but the files cannot be accessed. Causes of logical problems include files deleted by accident or on purpose, malware attacks, a drive reformatted by mistake, and faulty software. Virtually all logical hard drive problems can be fixed without opening the hard drive; logical data recovery software can do this.
Physical problems or failures: The disk drive may be physically faulty if it is not spinning, if there are problems reading from the drive, if it has bad sectors, or if it is making noise. This can be the consequence of a manufacturing defect, mechanical impact or voltage fluctuations. Recovery of data is still possible, but since the hard drive has physical problems it will likely be binned after the data is restored. The only way to recover data from a physically failed hard drive is either to replace the damaged components or to move the platters to a donor drive, and this process needs special care.
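A toy version of what logical recovery software does: even after directory entries are gone, many tools scan the raw disk for known file signatures (“magic numbers”) and carve files out starting from those offsets. The “image” below is just an in-memory byte string standing in for a raw device.

```python
# Minimal file-carving sketch: scan a raw image for known signatures.

SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",        # JPEG start-of-image marker
    b"\x89PNG\r\n\x1a\n": "png",    # PNG header
    b"%PDF": "pdf",                 # PDF header
}

def carve(image):
    """Return sorted (offset, type) pairs for every signature found."""
    hits = []
    for magic, ftype in SIGNATURES.items():
        start = 0
        while (pos := image.find(magic, start)) != -1:
            hits.append((pos, ftype))
            start = pos + 1
    return sorted(hits)

# Fake "disk image": junk bytes, a JPEG header at offset 16, a PDF at 28.
image = b"\x00" * 16 + b"\xff\xd8\xff\xe0" + b"\x00" * 8 + b"%PDF-1.4"
print(carve(image))  # [(16, 'jpeg'), (28, 'pdf')]
```

Real recovery tools add a great deal on top of this (file-length heuristics, handling of fragmented files), but a signature scan of this kind is the core of the technique.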
External hard drive failure is normally noticed when the computer does not recognize that the external hard drive is connected to the system; this is the first sign of a major external hard drive failure. Another key symptom is the external hard drive beginning to make clicking noises, which points to spindle and platter problems and is a sign of an imminent crash. In such a situation, the first thing to do is unplug the hard drive to prevent further damage. It is then better to find out whether these symptoms are due to physical failure or simply to file system problems.
How are you going to confirm that your external hard drive has a problem? Simply connect another working external hard drive to the USB port of the computer. If that hard drive is recognized by the computer, the fault most likely lies with the original external hard drive. However, the drive you use for the check should match the specifications of the affected drive; otherwise, your findings might be misleading. If you use the right hard drive for the check and it is recognized by the computer, you can conclude that the original hard drive has failed.
Why External Hard Drives Are Popular
An external hard drive works as a storage device, which (at the time of this writing) can store data in the 3-4 TB range and below. Most computer users use external hard drives to store extra information and things like music and photographs. Sometimes, users store more confidential information on such a device because it needs to be moved from place to place. If such an external hard drive gets corrupted, the user will be in a very tough situation.
The user may have to format the hard drive to recover from the external hard drive failure; however, this means losing all the data stored on it. Formatting the failed hard drive clears up its file system problems, and the drive will then function as before.
But the problem is recovering the data from the corrupted hard drive. There is nothing to worry about, as professional data recovery service companies often have good external hard drive recovery advice, though it can come at a cost. Both the failed external hard disk drive and the data stored on it can often be recovered.
Even if your external hard drive appears dead, recovery methods are still available. First, you can try to recover the data through your computer’s operating system, which may allow you to retrieve the files from the damaged external hard drive. If the drive’s enclosure is dead, you can remove the disk and use a USB-to-SATA or USB-to-IDE adapter to recover your files.
You cannot predict when your personal computer’s hard drive will crash. It can be really disheartening, because when the hard drive is damaged it takes important data with it. If other computer accessories break down, they can be easily fixed or replaced; but when it comes to a hard drive problem that may involve platter failure or water damage, the trouble can be huge.
Therefore, it will become extremely important to back up important computer data frequently as a way to handle hard drive troubles properly. This article points out a few handy tips to ease the detection of hard drive problems.
A hard drive may fail while the computer is booting, accompanied by sounds like “klung klung”; some hard disk drives fail with a moan instead.
A hard disk drive gradually builds up physical issues, and if you detect these issues early, you may avoid real data loss. It is therefore rather vital that you detect hard drive problems as soon as possible. To discover these types of issues, you can make use of a variety of software products that analyze the disk condition and warn you if any error exists. You can also use the chkdsk function of Windows, but it is not very practical on its own because it takes too much time. With monitoring software you need not check for hard drive problems every day; just set the date and time, and the check can run while you are away from the PC.
Many different hard drive problems can occur. If your hard drive is making a strange noise continuously, that is the first sign of a dangerous hard drive problem. Also, if your built-in defragmentation program directs you to run a test to check whether your hard drive is performing well, perform that test promptly.
Once you begin to see the signs of a hard disk failure, it is advisable to check whether your hard drive has a problem as early as possible, to prevent the drive from suffering further damage. Otherwise, your hard drive will end up requiring a specialized service.
Basic Hard Drive Failure Tests
Several basic test tools are freely available to check whether your hard drive has a problem or will soon crash. For users of Windows XP or Windows Vista, an “Error Checking” program is available that will find and fix basic hard disk errors. If free testing programs do not work, you will have to turn to experts to resolve the problem. Many companies work on hard drive problems and can run more powerful tests to pinpoint the exact fault in your hard drive. For example, SpinRite is one of the most powerful tools used today to test for hardware problems, and Hard Drive Mechanics is another tool used by professional data recovery service companies to diagnose and repair hard drive problems.
Though DOS is very popular in the embedded world, it does have some major drawbacks. First is the famous memory barrier limiting DOS-based applications to just 640 kbytes in size. In addition, DOS doesn’t have built-in support for multitasking. And because it is a 16-bit operating system, DOS cannot use the latest 32-bit C++ development tools from Microsoft and Borland International Inc.
How can the embedded world get around the limitations of DOS? One possible solution is Windows NT. When Microsoft created Windows NT, it first designed the Win32 application programming interface, a definition of the system calls that programmers use to write applications. One of the design goals of the Win32 API is to make it easy to port existing 16-bit Windows applications to 32 bits. Microsoft maintained compatibility in the Win32 API by keeping most existing Windows system calls for writing user-interface code.
Microsoft’s second goal for the Win32 API is even more relevant to the real-time world: industrial-strength multitasking. The foundation of the Win32 API is found in the roughly 300 Kernel32 functions, which provide all the services one expects in a multitasking operating system. Among them are memory allocation, file access, process control, multitasking, synchronization and interprocess communications.
Win32 – so many new errors, so little time.
A Win32 application that uses only the Kernel32 functions but not the Windows GUI is called a console application. These applications, which include development tools, database engines and utilities, are more like character-based MS-DOS and Unix applications. When Windows NT is used as a real-time operating system, the real-time software is typically built as a Win32 console application.
The major innovation in the Win32 API, in relation to real-time operating systems, is the inclusion of threads. While other OSes allow multitasking only between multiple applications, Win32 API-based threads let programmers multitask inside a single application. Though Windows NT was not the first OS to support threads, threads were integral to its design; they were not an afterthought.
The primary advantage of threads is that they have very little overhead. For example, during a context switch, only the CPU register set has to be changed. In addition, threads can communicate quickly and easily because they share a single address space. In terms of CPU cycles, threads offer real-time operating systems the least expensive method of multitasking.
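The Win32 call that creates a thread is CreateThread; since that API is Windows-specific, the sketch below uses portable C++ `std::thread` to illustrate the same two points: multitasking inside a single application, and cheap communication through a shared address space. The function name here is illustrative, not part of any API described in the article.

```cpp
#include <mutex>
#include <thread>

// Two threads multitasking inside one application. They communicate
// through the shared address space (the counter), with a mutex
// providing the synchronization service a multitasking OS must supply.
int run_two_threads(int iterations_each) {
    int shared_total = 0;   // visible to both threads: one address space
    std::mutex total_lock;
    auto worker = [&]() {
        for (int i = 0; i < iterations_each; ++i) {
            std::lock_guard<std::mutex> guard(total_lock);
            ++shared_total; // no message copying between tasks is needed
        }
    };
    std::thread a(worker);
    std::thread b(worker);
    a.join();
    b.join();
    return shared_total;
}
```

Because both threads touch the same variable directly, the only cost of communicating is taking the lock; a process-based design would need pipes or shared-memory segments to achieve the same thing.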
The Win32 API also supports processes, the more traditional approach to running multiple applications at once. Each Win32 process, which may contain one or more threads, gets its own private address space. Processes also have other private resources, such as open files, pipes and so on.
Threads are not just a concern of the operating system. They need to be supported by the entire development tool chain, including the compiler, linker, debugger and run-time libraries. For example, a multithreaded application requires a more sophisticated, “thread-aware” debugger. When a breakpoint occurs, looking at the CPU registers to see the inner workings of the application may not be sufficient. It may also be necessary to examine the CPU registers that belong to other threads in the application. In addition, thread-aware debuggers make it possible to set breakpoints for specific threads, and have commands that start and stop the execution of individual threads.
Thread-safe run-time libraries are also important. A thread-safe library regulates use of the library, avoiding collisions between different threads accessing shared data in the library. Without thread-safe run-time libraries, programmers would have to add code to their software manually in order to regulate access to run-time library functions. Besides being time-consuming, this process is error-prone.
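As a rough illustration of what a thread-safe run-time library does internally, the sketch below hides a mutex behind the library's interface so concurrent callers cannot collide on shared state. SafeLog is an invented class for illustration, not a real library API.

```cpp
#include <cstddef>
#include <mutex>
#include <string>
#include <vector>

// Sketch of a "thread-safe" library routine: the library itself
// regulates access to its shared state, so callers need no extra code.
// Without the lock, two threads appending at once could corrupt the
// underlying vector.
class SafeLog {
public:
    void append(const std::string& line) {
        std::lock_guard<std::mutex> guard(lock_); // library does the locking
        lines_.push_back(line);                   // shared library state
    }
    std::size_t size() {
        std::lock_guard<std::mutex> guard(lock_);
        return lines_.size();
    }
private:
    std::mutex lock_;                // hidden from the caller entirely
    std::vector<std::string> lines_; // data every thread shares
};
```

The point of the article's claim is visible here: if the lock lived outside the library, every caller would have to remember to take it, and one forgotten lock would be a latent, hard-to-reproduce bug.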
Compilers also play a critical role in multithreaded applications. The two most widely used C++ compilers for Windows NT are Microsoft Visual C++ and Borland C++. Both of these compiler packages include a full complement of tools for writing 32-bit multithreaded applications under Windows NT.
While Windows NT is a popular choice for high-end embedded systems, it is not a general-purpose X86 real-time operating system. The problem is very simple: Windows NT comes in only one size, Extra Large. It is not scalable to run on a range of X86 hardware, but requires a Pentium-based PC with at least 16 Mbytes of RAM, a hard disk and a CRT display. Windows NT cannot run on smaller systems and will not fit in ROM. Given that the entry-level cost for PC hardware is about $2,000, Windows NT is restricted to use in rather expensive embedded systems.
One solution to Windows NT’s size problems is to build a real-time kernel based on the Win32 standards. Win32 API-based kernels can use Windows compilers such as Visual C++ and Borland C++.
In addition, they can be designed from the ground up to meet the requirements of an embedded system.
For example, Win32 API-based kernels can be designed to be scalable and to fit into ROM, and can have a deterministic scheduler.
A number of Win32 API-based real-time kernels already exist. One of them, from Microsoft, is a specialized kernel for cable-TV digital set-top boxes. By using a Win32-based kernel, Microsoft engineers didn’t have to create new tools for writing interactive TV applications. They were able to use existing Windows NT development tools, such as Visual C++. In addition, following Win32 standards makes it easier to integrate set-top boxes with video servers running Windows NT.
Another real-time kernel based on the Win32 standards is Phar Lap’s ETS Kernel, a general-purpose real-time kernel.
Because the ETS Kernel uses a subset of the Kernel32 API, it is able to work with the Microsoft and Borland C++ compilers, debuggers and thread-safe run-time libraries. And with its small memory footprint, the ETS Kernel may be used in embedded-system designs for which Windows NT would be too big.
Windows NT can also provide a prototyping environment for embedded applications, allowing developers to use a common set of Win32-based tools for both host and target. Prototyped code can move from the Windows NT host to the Win32-based target simply by relinking.
Given the hardware/software infrastructure that has developed to support the X86 architecture in the desktop world, X86 has become the safe choice for embedded systems. With the introduction and acceptance of the Win32 API, the architecture now has the first widely supported 32-bit real-time software standard. Developers can be assured that the X86 architecture will be well-supported by hardware and software vendors for at least the next 10 to 15 years.
When looking for professional server solutions, particularly RAID arrays, enterprises continue to choose in-house brands, or generic RAID servers, citing great cost reductions. This has helped lift lower-end manufacturer Acer to the number one spot among major manufacturers in a recent server sales poll.
Compaq Computer Corp., Houston, was second, cited by 20 percent of resellers, followed by the IBM PC Co., Somers, N.Y., with a 10 percent share. Hewlett-Packard Co., Palo Alto, Calif., with 5 percent, and Digital Equipment Corp., Maynard, Mass., with 4 percent, rounded out the top five best-selling list.
Some resellers are still more comfortable recommending major manufacturers’ servers than building their own.
“Compaq has a strong service and support organization, and their ProLiant server line is very dependable when it comes to mission-critical applications, which form the majority of our sales,” said Tony Audus, director of purchasing for Technology Partners Inc., a reseller in Ann Arbor, Mich.
CRN instituted coverage of servers last month. In all, 170 resellers responded to the survey.
Looking at desktops, clone or in-house systems maintained their sizable lead in the best-selling category, cited by 39 percent of resellers. Acer, San Jose, Calif., captured the top spot among major manufacturers for the first time since July 2010, with 10 percent of reseller votes. Compaq was next with 8 percent.
Additional data will show if Acer can maintain its new-found lead. When Compaq slipped last year, it quickly recovered and recaptured first place among major manufacturers after only one month.
Both companies, however, need to be looking over their shoulders, because HP is right on their heels. The company captured 7 percent of reseller votes in March, the highest percentage recorded since the survey began.
Moreover, shortages of HP desktops soared last month, indicating the company would have done considerably better if it had been able to supply enough units into the channel. Some 23 percent of resellers indicated these systems were in short supply, almost four times the percentage in the previous survey. In comparison, during the past six months on average, only 8 percent of resellers cited shortages of HP desktops.
Some resellers, however, think that this shortage situation is confined to the higher end of the desktop market.
“My experience is that most of the HP shortage problem is at the high end, where brand names make a difference to my clients,” said Compu Pro Systems’ Patton.
“At the lower end, in-house and clone systems fit the bill nicely, especially for my small-business customers that are not willing to spend the extra money to buy a major manufacturer’s system,” he said.
But even so, the ability to remedy this product shortage will be crucial to HP’s future success, because survey results indicate fewer resellers are reporting shortages of Compaq, IBM, and, especially, Acer systems.
Twenty percent of resellers indicated Compaq desktops were in short supply, 13 percent cited IBM, and only 3 percent cited Acer.
Some resellers already are switching customers to these rival manufacturers.
If HP can overcome these problems, CRN believes the company has a good shot at capturing the No. 1 best-selling spot among major manufacturers over the next few months.
Turning to notebooks, Toshiba, Irvine, Calif., maintained its secure hold on first place in the best-selling category. Twenty-six percent of resellers gave Toshiba the nod, up from 21 percent last month. IBM came in second with 17 percent, some 6 points higher. Compaq regained its place in the top three with 9 percent of reseller votes, compared with 5 percent in the previous survey.
Toshiba’s market share increase came even as its shortage problems began to re-intensify. Some 32 percent of resellers indicated Toshiba systems were in short supply, up from 24 percent in February and 26 percent in January.
“The supply situation with Toshiba notebooks is very depressing, and it is costing us sales,” said Technology Partners’ Audus. “Basically every popular model is unavailable, and we have received no word from Toshiba when the situation might ease up.”
But more than one reseller says Toshiba notebooks can be had for a price through the gray market.
“You will pay more for them, which cuts into your margins, but at least you can service your customers,” said Lee Eikov, president of Faceted Information Systems Inc., a reseller in Stroudsburg, Pa.
So far, Toshiba has been able to weather these problems and maintain its place as the best-selling notebook manufacturer. But if the situation continues, it eventually will affect Toshiba’s ability to maintain and increase its market share.
Other survey results show that more resellers are anticipating a slowdown in sales growth over the next three months compared with the previous three months.
Each month the survey is mailed to approximately 1,450 resellers randomly selected from a listing of CRN subscribers, and to an additional 550 resellers who have agreed to ongoing participation in the survey. The responses are then tabulated to produce the survey results for each month. In March, 170 responses were received.
In addition, the surveys from each month are combined to form three-month and six-month moving averages, providing information on trends in reseller responses over time. The statistical accuracy of these averages is higher than the monthly figures because they are based on a larger number of surveys. Moreover, moving averages are less susceptible to the unavoidable statistical biases that may enter into the results from any single month.
Is this the year that clusters of Intel Corp.-based servers start populating corporate sites? Not likely, despite the pending release of Microsoft Corp.’s Wolfpack software for Windows NT.
Beginning this spring, several server and storage manufacturers will release cluster-ready systems based on Wolfpack, which will deliver high-availability features between two server nodes.
But IT managers probably won’t sink their teeth into Wolfpack until 1998. The reasons? First, many sites are still kicking the tires on NT as a mission-critical operating system and certainly aren’t willing to bet their businesses on a first-generation product such as Wolfpack. Second, customers have other options for adding high-availability features to their network servers. And finally, many sites haven’t been able to sift through all the hype about clustering technology to decide if it’s a worthwhile investment.
“Right now we don’t know a lot about clustering and the benefits it gives us,” said Dan Hendrickson, MIS director at Pittencrieff Communications Inc., in Abilene, Texas, which has added several NT servers over the last year as it ramps up a new Internet service provider business. “We use NT primarily because the database and tools are cost-effective. But as far as clustering [goes], I’m not sure where it fits in.”
Despite customers’ hesitation, vendors are still rushing to get their Wolfpack products out the door, with their early targets being World Wide Web server sharing.
“Clustering is critical for a Web server,” said John Young, director of product marketing and business operations at Compaq Computer Corp.’s Server Products Division, in Houston. “[On the Internet], performance is not an issue, but absolute availability is.”
Survival of the fittest
But there are options other than Wolfpack for providing high availability, which reduces server downtime, and scalability, which adds more processors for more power. In many cases, these options (in both NT and Unix environments) are more established.
Digital Equipment Corp. had clustering on its VAX systems over a decade ago. Since then, Unix-based clusters from Digital, NCR Corp., Hewlett-Packard Co., Tandem Computers Inc. and IBM have emerged.
Unix clusters couple two or more servers to back each other up in the case of a hardware failure. Operating as a single system image, several machines can be managed as one. As a result, cluster-enabled applications can run across separate server nodes while the clustering software evenly distributes the processing power behind one application, a feature known as load balancing.
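As a minimal model of the load-balancing idea, the sketch below distributes units of work evenly across the nodes of a cluster in round-robin fashion. The names are hypothetical and no real cluster API is used; an actual cluster applies this kind of policy in the operating-system layer, transparently to the application.

```cpp
#include <cstddef>
#include <vector>

// Toy model of cluster load balancing: work submitted to one logical
// system is spread evenly across the member nodes.
struct Node {
    int jobs_assigned = 0;
};

class RoundRobinBalancer {
public:
    explicit RoundRobinBalancer(std::size_t node_count)
        : nodes_(node_count) {}

    // Assign the next unit of work to the next node in turn, and
    // return which node received it.
    std::size_t dispatch() {
        std::size_t chosen = next_;
        nodes_[chosen].jobs_assigned++;
        next_ = (next_ + 1) % nodes_.size();
        return chosen;
    }

    int load(std::size_t node) const { return nodes_[node].jobs_assigned; }

private:
    std::vector<Node> nodes_;
    std::size_t next_ = 0;
};
```

Real load balancers weigh node utilization rather than blindly rotating, but the single-system-image property is the same: the caller of dispatch() never needs to know which physical machine did the work.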
Wolfpack won’t get to that level of performance until the second generation debuts next year.
Also on the Unix side, ccNUMA (cache-coherent nonuniform memory access) architectures provide symmetrical multiprocessing scalability to hundreds of processors. Applications benefit from the same single-system-image and load-balancing characteristics of a cluster, but without modification to the software.
Sequent Computer Systems Inc. and Data General Corp. have ccNUMA designs for both Unix and NT. However, the processor scalability and shared-memory design go against the grain of NT, a shared-nothing operating environment scalable only to eight processors. As a result, NUMA is making major inroads in Unix environments, leaving the door open for Wolfpack on the NT side.
But Wolfpack also faces a handful of NT failover solutions from Octopus Technologies Inc., Vinca Corp., Network Integrity Inc., Veritas Software Corp., Sequoia Systems Inc. and others.
Wolfpack is similar to these products in that they all provide a way to eliminate downtime in the event of a single server failure. But whereas each depends on different techniques that may require specific hardware and software configurations, Wolfpack promises hardware and application independence.
Therefore, these third-party companies will have to evolve their products as Wolfpack steps into their space with standards-based technology.
Vinca, for example, plans to add back-end mirroring to the storage portion of Wolfpack clusters. This feature will provide a speed advantage and eliminate the possibility of data loss when a server is down.
Octopus plans to ship by the end of the year a fault-tolerance clustering option for Wolfpack. With this feature, an offsite cluster could act as the backup for an on-site cluster in the event that an entire network goes down.
“The reason they are helping with [Wolfpack] is that they don’t want the same vendor lock-in that the Unix customer [faces]. And they want a wide range of application availability,” said Mark Wood, Microsoft’s Wolfpack product manager.
That’s the same reason IBM is in the midst of establishing a common set of APIs for Unix clustering based on technology dubbed Phoenix. Phoenix will be available on IBM’s Intel-based PC Servers in the next few months, and “the intent is to work with Microsoft on Wolfpack,” said Bill O’Leary, an IBM spokesman in Somers, N.Y.
Wolfpack compatibility will establish a crossover between NT and Unix for heterogeneous clusters, but IBM is still hammering out the issues of how applications can talk to two sets of APIs.
IBM and developers of fault-tolerant software admit it makes better sense to comply, rather than compete, with Wolfpack because Microsoft is lining up a long list of application and hardware support.
The Redmond, Wash., company has provided its Wolfpack software development kit to more than 50 application vendors. Moreover, Wolfpack is being developed on top of standards such as SCSI-2 for attaching multiple servers to a SCSI-based storage subsystem.
Vendors can opt to connect servers and subsystems via a high-speed interconnect, such as Tandem’s ServerNet. Such a connection will enable load balancing in phase one of Wolfpack, as long as the application is developed specifically for that task. Microsoft will deliver application scalability across a cluster in its SQL Server database next year, and Tandem’s ServerWare SQL will scale in April, said officials from both companies.
Application control is one area that piques IT managers’ interest, since that’s one function many fault-tolerant packages have not delivered.
Current fault-tolerant packages “protect you against hardware issues. But if clustering offers a more reliable software platform, then it will be interesting,” said David Blanchard, decision support analyst at Quaker Oats Co., in Chicago.
Leaders of the pack
The first companies to deliver a Wolfpack cluster based on servers, interconnect and storage are six system manufacturers that are working with Microsoft on the technology: Compaq, Digital, HP, IBM, NCR and Tandem. Microsoft’s seventh partner, Intel Corp., is tweaking its Pentium Pro SHV board for use in a cluster.
These OEMs promise that customers will have an upgrade path to the clustering technology that doesn’t require the purchase of a turnkey solution.
“If someone has already purchased a server configuration that we certify [for Wolfpack], then all they need to do is buy a second server and the equipment to make a cluster environment,” said Christophe Jacquet, clustering product manager at HP’s NetServer Division, in Santa Clara, Calif.
Upgrades may take the form of a clustering kit, much like Digital provided with its own NT clustering software, released last May, that had a price tag of about $1,000 per server.
With so many different clustering efforts, some customers–while sold on the technology–aren’t willing to commit yet to one solution.
“We will have full migration to NT clusters in the next 36 months, depending upon what the vendors provide. But I want to play it as safe as I possibly can,” said David Forfia, manager of information technology services for the Electric Utility Department of the City of Austin, Texas, which uses VAX VMS clusters and mirrors Microsoft’s Exchange Server on NT with Octopus technology. “Nothing I can do with clustering now on NT is going to make or break my company. I’m keeping all options open.”
Related article: Wolfpack: A Two-Phase Project
First focus is on high availability; load balancing, other performance improvements expected in second phase
In the first phase of Microsoft Corp.’s Wolfpack, the high-availability benefits of a two-node cluster are the primary focus. The second phase, still more than a year off, will go beyond the two-node barrier and start to address some of the performance potential of clustering with Windows NT.
New load-balancing features will dominate the second release, tying Wolfpack’s failover process to server utilization thresholds that can be set by the network administrator.
Clustering servers together requires dedicated, high-speed links between servers. These interconnections handle heartbeat signaling, or continuous verification of each server’s operational status. They also are used for data transfers within clusters.
In the two-node server cluster arrangement, each server acts as the failover partner of the other. If one server crashes, the second server automatically stands in for the lost resource until it can be restored to service.
This creates a high-availability environment that is viewed as a single computing resource. The single-system view of the server cluster makes hardware failures transparent from the client perspective.
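The heartbeat-and-failover decision described above can be sketched as follows. HeartbeatMonitor is an invented class for illustration, not part of Wolfpack or any real cluster API; timestamps are passed in explicitly so the decision rule is easy to see and test.

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Sketch of heartbeat-based failure detection between two cluster
// nodes. Each node timestamps its partner's heartbeats, which arrive
// over the dedicated interconnect; if none arrives within the timeout,
// the surviving node takes over the partner's resources.
class HeartbeatMonitor {
public:
    explicit HeartbeatMonitor(std::chrono::milliseconds timeout)
        : timeout_(timeout) {}

    void record_heartbeat(Clock::time_point now) { last_seen_ = now; }

    // True when the partner has been silent past the timeout and this
    // node should stand in for the lost resource.
    bool should_fail_over(Clock::time_point now) const {
        return (now - last_seen_) > timeout_;
    }

private:
    std::chrono::milliseconds timeout_;
    Clock::time_point last_seen_{};
};
```

The timeout is the tuning knob here: too short and a busy partner is declared dead prematurely; too long and clients see a longer outage before failover begins.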
One administrative benefit of Wolfpack is the rolling upgrade, which allows software and hardware updates to be performed on one system at a time, without a loss of service. In fact, the terms “primary” and “secondary” servers don’t apply in server clusters because all server resources are equal.
Managing the failover process requires close ties between servers and applications. The Wolfpack APIs are making clustering, traditionally a proprietary technology, into a more broadly supported approach. Using these APIs, developers have started coding cluster-aware applications that will run on Wolfpack-compliant hardware (server platforms, disk-drive controllers and storage arrays) to be certified by Microsoft.
The APIs were issued in November, so by the time the second phase rolls out, even more cluster-aware applications will be available.
(Apple Network Servers were a short-lived technology, but the geeks who still have them swear by them.)
Apple’s Network Servers are aimed at corporate users who want the power and flexibility of Unix without the grief of the power and flexibility of Unix. The Network Server hits this target and is also worth a look for those expanding an installed base of AIX or other Unix systems.
The Network Server comes in two different models, Network Server 500 and Network Server 700, with several configurations of each. I tested the Network Server 700/200, with a 200-MHz PowerPC 604, 64MB of RAM, two 4GB disks, and an eight-speed CD-ROM. Weighing in at a cool $18,367, this machine is not for everyone, but Apple offers a variety of configurations starting at about $9,768 for the Network Server 500/132, a 132-MHz version with fewer expansion options.
Both the 500 and the 700 offer seven trays accessible from the front that can be filled with hot-swappable media. The Network Server 700 will support two internally mounted disks. The cooling fans are hot-pluggable; you can also get redundant hot-swappable power supplies as an option on the 700.
Simply put, the hardware is a pleasure to work with. The system is on rollers, or it can be rack mounted. A locking, sliding door covers the drives and the power switch during normal operation.
The system board, with the CPU card, memory, and expansion slots, slides out of the back for maintenance access. The only hardware issues I found were the difficult-to-read LCD on the front of the machine and the lack of a three-button mouse (you can buy one separately from Apple).
The operating system is equally enjoyable to work with. By Unix standards, installing AIX 4.1 is a snap, and Apple takes this one step further with easy-to-follow documentation. When the machine reboots after the second phase of the installation, you have a fully configured Unix system.
The system is binary compatible with all RS/6000 family software, but Apple has taken the extra step of certifying the hardware with several key software vendors, including Oracle, Informix, Lotus, and Netscape.
Of course, administering Unix is not trivial. The fact that it looks great and has that friendly little apple on the front does not mean that the issues that Unix brings to an environment go away.
Even on this user-friendly machine, it is important to understand the security, networking, performance, and storage-management concerns associated with Unix.
Fortunately, Apple and IBM have made dealing with these issues as easy as possible. Apple’s documentation gets you up and running quickly, with pointers to IBM’s thorough online documentation, InfoExplorer.
In addition, multiple administration tools provided by IBM and Apple make system administration much easier.
One of the most interesting tools is the Disk Management Utility, created by Apple for the Network Server series. It lets you remotely configure the Network Server’s storage from a Macintosh, and it communicates with the server via AppleTalk instead of TCP/IP.
IBM’s contribution to system management includes the Visual System Management tool. This icon-based utility provides drag-and-drop access to administrative tasks such as adding users and managing storage.
All that said, I recommend that you learn to use SMIT, the System Management Interface Tool. It is a time-tested AIX administration tool that provides both text- and GUI-based interfaces and includes the most comprehensive set of administrative tasks of the three tools provided with the system. Plus, SMIT can display the command it is about to execute, letting you verify what you are doing and making it a great learning tool.
Apple has unleashed a powerhouse on the workgroup server market. It is not a cheap solution, especially once you add to the price of the hardware $1,489 for the AIX license and $1,399 to upgrade that license from two users to unlimited users.
However, if you need the reliability of hot-swappable components and you value the quality that you get from Apple, this machine is a winner.