Choosing Server RAM

Posted by lucas On May - 2 - 2013

The difference between server RAM and desktop RAM
Lucas Moore - Hostirian Support

One question I'm regularly asked is, "What's the difference between the memory in my server and the memory in my desktop PC?" Whether you're a system administrator or a hosting customer who just wants more insight into the hardware behind the service you're buying, a little look into this topic can help you make informed decisions and give you peace of mind.

The main difference between the RAM in a server and the RAM in a desktop comes down to reliability. The components on a server RAM module are 'binned' (that is, they're selected during the manufacturing process) for higher quality. Each component, from the individual DRAM chips, to the resistors and printed circuit board, to finally the entire module, is held to a higher standard and tested much more intensively than the average desktop RAM module. In desktop RAM, components are selected first for price and then for speed. This is why you'll notice server RAM is generally much more expensive than its desktop counterpart. Clock for clock, it also performs more slowly. In other words, PC2-5300 server RAM will support fewer memory operations per second than desktop PC2-5300 RAM.

Part of this price and performance difference is simply due to the aforementioned selection of components: it takes more time and effort to do the additional selection and testing, hence the higher cost. Then, by running those higher-quality components at lower speeds, they are less stressed, further increasing reliability.
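
For a rough feel of what a module name like PC2-5300 actually promises, here is a quick back-of-the-envelope calculation. This is only a sketch based on the standard naming convention (PC2-5300 corresponds to DDR2-667, moving 8 bytes per transfer); real-world throughput also depends on timings and on whether the module is registered.

    # Theoretical peak bandwidth implied by the PC2-5300 label.
    # DDR2-667 runs a 333 MHz bus at double data rate (~667 million transfers/s),
    # and a standard DIMM is 64 bits (8 bytes) wide.
    transfers_per_second = 667_000_000
    bytes_per_transfer = 8
    peak_mb_per_second = transfers_per_second * bytes_per_transfer / 1_000_000
    print(f"PC2-5300 theoretical peak: {peak_mb_per_second:.0f} MB/s")  # ~5336 MB/s

The label promises the same theoretical peak for a server module and a desktop module; the slowdown described above shows up in the memory operations per second the module actually sustains, not in the headline number.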

There are other things at work here, though. In the name of reliability, server RAM almost always employs some additional technologies. The most important ones are Registered/Buffered memory, and ECC (Error Correcting Code).

Registered Memory
Registered memory uses a device called a 'hardware register' that acts as a buffer between the memory module and the system's memory controller (hence the alternate term 'buffered memory'). This extra component takes some of the electrical load off of the memory controller, which makes the system more stable while allowing for greater amounts of RAM. In many cases, registered memory will be slower than regular desktop memory because of this extra component and the step it adds to RAM operations: RAM -> Register -> Memory Controller, instead of RAM -> Memory Controller. In the picture below, you can see a regular DDR2 desktop DIMM (top) and a DDR2 ECC registered server DIMM (bottom). Note the extra components in the middle of the server DIMM; they are a ready indicator that you're looking at a registered/buffered memory module. These extra components, along with lower production volumes, help account for the added cost of registered memory.

[Image: standard DDR2 desktop DIMM (top) and DDR2 ECC registered server DIMM (bottom)]
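
To make that extra step concrete, here is a toy latency sketch. The numbers are invented for illustration (they are not from any datasheet); the point is simply that the register re-drives each command one clock later than an unbuffered module would see it.

    # Toy model: a registered DIMM adds one clock of command latency because
    # commands pass through the register before reaching the DRAM chips.
    BASE_READ_CLOCKS = 5      # hypothetical unbuffered read latency, in clocks
    REGISTER_CLOCKS = 1       # extra clock spent in the register stage

    def read_latency_ns(registered: bool, clock_ns: float = 3.0) -> float:
        """Read latency in nanoseconds (a 3 ns clock is roughly DDR2-667)."""
        clocks = BASE_READ_CLOCKS + (REGISTER_CLOCKS if registered else 0)
        return clocks * clock_ns

    print("unbuffered:", read_latency_ns(False), "ns")  # 15.0 ns
    print("registered:", read_latency_ns(True), "ns")   # 18.0 ns

In exchange for that extra clock, the memory controller drives a single register per module instead of every DRAM chip, which is what lets a server run many more DIMMs per channel without losing stability.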

ECC Memory
ECC stands for 'Error Correcting Code'. Memory with this technology, when used with a system that supports it, can correct errors that occur in the data held in RAM. These errors can be caused by anything from cosmic rays to voltage fluctuations in the server's power supply. An error in a single bit could cause your server to crash, or worse still, could corrupt your essential data. A server that supports ECC uses memory that contains extra bits (often called check bits); the server's memory controller can use these bits to both detect and correct errors by performing a series of mathematical operations that compare the data held in RAM with its expected result. As a server customer, the main point to take home is that ECC helps your server stay stable and your data remain accurate. Much like registered memory, the components necessary for ECC add some cost and a small performance penalty in exchange for greatly enhanced reliability.
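
To make the check-bit idea concrete, here is a minimal Python sketch using a classic Hamming(7,4) code, which protects 4 data bits with 3 check bits. Real ECC DIMMs use a wider SECDED code (typically 8 check bits protecting 64 data bits) implemented in the memory controller, but the principle is the same: the check bits form a 'syndrome' that points at the flipped bit so it can be corrected.

    # Hamming(7,4): encode 4 data bits with 3 check bits, then detect and
    # correct a single flipped bit -- the same idea ECC memory uses at scale.

    def encode(nibble):
        """Encode 4 data bits into a 7-bit codeword (list index 0 = position 1)."""
        d = [(nibble >> i) & 1 for i in range(4)]           # d1..d4
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def decode(bits):
        """Return (corrected nibble, error position); position 0 means no error."""
        b = bits[:]
        s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
        s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
        s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
        syndrome = s1 + (s2 << 1) + (s3 << 2)               # 1-based error position
        if syndrome:
            b[syndrome - 1] ^= 1                            # flip the bad bit back
        return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3), syndrome

    word = encode(0b1011)
    word[4] ^= 1                                            # simulate a stray bit flip
    value, pos = decode(word)
    print(value == 0b1011, pos)                             # True 5

Any single-bit flip in the codeword decodes back to the original value; that is exactly the property that lets an ECC-equipped server ride through a stray bit flip instead of crashing or silently corrupting data.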

At Hostirian, we are always ready and available to help you find a server solution that fits your needs.

For more information, see the following articles:
http://tirasoft.blogspot.com/2010/03/difference-between-server-ram-and.html
http://en.wikipedia.org/wiki/Registered_DIMM
http://en.wikipedia.org/wiki/ECC_memory


When choosing colocation or data center services for your organization, there are many options to weigh. One of the most important things to look for is a data center with SAS 70 Type II compliance.

What is SAS 70 Compliance?
The most widely accepted auditing standard for service organizations is Statement on Auditing Standards (SAS) No. 70. It was developed by the American Institute of Certified Public Accountants and certifies the functionality and security of the control objectives and activities of a service organization.

SAS 70 Type II compliance is particularly important for auditing controls in information technology. Colocation and data centers may house some of your organization's most sensitive information.

Any data center you trust with your data hosting, infrastructure or other data services should be SAS 70 Type II compliant. Here are three reasons why:

1. Differentiate the Top Notch Data Centers
With the overwhelming number of options in data centers and colocation for your business, the first way to separate the great providers from the sub-par ones is to look for SAS 70 Type II compliance.

This means the organization has had its control policies evaluated and tested by an independent, third-party auditor. SAS 70 is the authoritative mark disclosing control activities and processes to you, the customer.

As for data facilities without SAS 70 compliance, you cannot be sure their control processes will meet your standards, or the standards of your auditors. This alone lets you narrow down your colocation options.

2. Rest Easy, Your Data Is Secure
Your data is important. There is no better reason for selecting a data or colocation center with SAS 70 Type II compliance. The controls and processes in place can seriously affect the security of your information.

By selecting a provider that is SAS 70 Type II compliant, you have independently verified documentation of those processes and their effectiveness. When your data is part of the equation, make sure it is not in the hands of an unverified data center.

3. Save Time and Money on Audits
When it comes time for your organization to be audited, the process can be made much simpler by working with a colocation or data facility that is SAS 70 Type II compliant.

If your data center is SAS 70 compliant, it can provide a Service Auditor's Report directly to your auditor. This is an immense advantage to you and your auditor. Without SAS 70 compliance from your data facility, you may end up covering the additional cost of sending your auditor to that service organization.

In the end, both time and money will be saved during the auditing process. By selecting a data or colocation center that is already SAS 70 Type II compliant, you make both your job and your auditor's job easier.

When weighing your options in the selection of a colocation or data center, look for SAS 70 Type II compliance. Any service provider that is compliant will be able to provide you with documentation of compliance.

Ryan McSparran is a freelance business writer. Ryan writes about various businesses including Latisys.com and covers topics related to business technology, including Chicago data centers and IT security issues.


Article from articlesbase.com

Why Do You Need an AT&T Data Center?

Posted by admin On December - 30 - 2010

A data center is a facility built around reliable power and ventilation; the balance between the two must be maintained to ensure that important services are delivered continuously. As business demands grow, additional equipment must be bought and added to the system to provide more features and services, but that extra equipment consumes space and can introduce technical complications. AT&T data centers provide the same services and features as any data center, but without the worry of space consumption, equipment upgrades and machine maintenance. They are equipped with modern, advanced machines that can meet a company's business needs, and they are continuously evaluated and monitored by a team of IT specialists to make sure the machines are performing as efficiently as they can.

AT&T data centers offer their clients and customers business solutions that are critical to their operations. Their applications come complete with the necessary network management tools and are capable of handling any situation, from the simplest to the most complex. The streaming media components that Internet-based businesses depend on are delivered without network delays or lag. A global network management platform lets users monitor and manage their web applications with efficiency and precision, keeping them up and running and giving customers uninterrupted access to their servers. In addition, business data and files are kept secure, and quick recovery of lost files and data is another valuable service AT&T data centers provide should any unanticipated or accidental errors occur.

When building a plan for constructing, staffing, supplying and managing a data center, there is a lot to consider, including how your products and your company affect the environment and what those impacts mean for your customers. The answer depends on the type of customer and data center. In today's supply-constrained market, most service providers have little opportunity to reach higher standards of efficiency or to reduce their environmental impact, though even modest improvements carry marketing benefits. For single-tenant sites, the efficiency gains and the community benefits of a reduced environmental footprint can be extensive.

AT&T's worldwide Internet Data Centers assure continuous operation of applications for international companies, delivering dependable local service from New York to Hong Kong. From procedures to energy consumption, AT&T data centers are as dedicated to efficient service as they are to protecting their customers' critical information resources.

 

The author is an AT&T Master Solution Provider from Digital Management Solutions who specializes in helping customers make the most of their communication and network needs. He works tirelessly to provide powerful, efficient and cost-effective solutions, such as Datacenters and Opteman, to address clients' communications needs.

 


Article from articlesbase.com

The Three Fundamentals of Data Center Security

Posted by admin On December - 30 - 2010

As data security becomes critical for more companies, many of those companies are turning to off-site data centers to securely house their critical systems.

While data facilities in general provide an increased level of security for a company's sensitive data, not all providers are equal in their security standards. Some offer different types of security measures and some are simply better than others.

When searching for an off-site data center for your business, it is important to find a provider that fits the needs of your organization. The security needs of every company are different and finding a center tailored to yours will ensure the best security at the best possible cost.

While security needs differ from company to company, there are three things every organization should look for in a data center. While providers may offer a list of other security options, these are the three fundamentals of data security.

1. Multiple Locations
When it comes to client data or other sensitive information, a data storage facility with multiple locations across the country offers some clear advantages.

The most obvious benefit is redundancy. With multiple secure locations, an extra layer of protection exists for your data. Disaster recovery becomes much simpler when data is replicated in more than one location.

2. SAS 70 Type II Compliance
SAS 70 Type II compliance is the most widely recognized auditing standard for colocation and data storage centers. This has become almost mandatory for any company looking to house their data off-site.

SAS 70 is a third-party audit of the data facility, giving you measurable assurance that security procedures are actually being followed. With this stamp of approval, there is no guessing at how secure your data center really is. When it comes to trusting your critical data with a data center supplier, this is a no-brainer.

3. Secure Facility
The security of your data is only as good as the security of the facility it is housed in. When researching data centers, take a close look at the security features available within the physical premises.

A secure facility should restrict access throughout the building with multi-layer access requirements. This may include things like fingerprint scanners and secure key-card authentication.

In addition, a data center should include camera surveillance and on-site personnel 24/7. Support should be available at all times and systems continuously monitored. This allows for immediate response in an emergency situation.

Finally, a secure facility should include proper power backup with generators or batteries for uninterrupted uptime. Advanced cooling systems should also be in place to prevent any hardware issues.

When it comes to your organization's critical data, do the research to be sure it's in safe hands. Many data centers offer a suite of premium security features at different pricing levels. You will need to decide what features make the most sense for your business. But before considering any additional features, start by verifying that all data center providers on your due-diligence list meet these fundamental requirements.

Ryan McSparran is a Colorado-based business writer covering topics that affect organizations along the Front-Range. Ryan's work includes organizational development and Denver data center issues.


Article from articlesbase.com


A data center is a place where a library of data is stored, handled and disseminated. It is a facility housing computer systems and associated components, such as telecommunications and storage systems. The facility is equipped with redundant or backup power supplies, redundant data communications connections, environmental controls (such as air conditioning and fire suppression) and security devices.

A data center (or datacentre) may also be referred to as a 'server room,' a computer closet or a server farm. In essence, a data center can be synonymous with a network operations center (NOC), a restricted-entry area containing automated systems that constantly monitor server activity, website traffic and network performance.

The origins of data centers trace back to the early days of the computing industry, when computing systems were difficult to operate and maintain and therefore demanded a dedicated environment.

Many cables were needed to connect all of the components, and solutions to accommodate and manage them were devised, such as standard racks to mount equipment, raised flooring, and cable trays (installed overhead or underneath the raised floor). In addition, early computer systems required a great deal of power and had to be cooled to avoid overheating. Security was crucial: computers were expensive and were often used for military purposes.

Around 1980, the microcomputer industry started to boom and computers began to be deployed just about everywhere, in many cases with little or no regard for operating requirements. However, as information technology (IT) operations grew in complexity, companies became aware of the need to manage IT resources. With the advent of client-server computing during the 1990s, microcomputers (now called "servers") began to find their place in the old computer rooms.

The availability of inexpensive networking equipment, along with new standards for network cabling, made it feasible to use a hierarchical design that placed the servers in a dedicated room inside the company. It was at this point that the term "data center," as applied to specially designed computer rooms, began to gain popular recognition.

The growth of data centers accelerated during the dot-com bubble. Companies needed rapid Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet, and setting up such equipment was not viable for many smaller companies.

Many companies began building very large facilities, known as Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were created to handle the scale and the operational needs of such large facilities. These practices gradually migrated to private data centers, adopted mainly because of their practical results.

As of 2007, data center design, construction, and operation is a well-established discipline. Standard documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design.

Data centers are categorized into four "tier" levels.

The simplest is a Tier 1 data center, which is essentially a server room following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission-critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods. An additional consideration is placing the data center underground, both for data security and for environmental reasons such as cooling demands.
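
For a rough sense of what those tiers mean in practice, the sketch below converts the commonly cited Uptime Institute availability targets into allowed downtime per year. Treat the percentages as indicative rather than contractual; exact figures vary by source.

    # Commonly cited availability targets per tier (indicative only).
    TIER_AVAILABILITY = {1: 0.99671, 2: 0.99741, 3: 0.99982, 4: 0.99995}

    HOURS_PER_YEAR = 24 * 365
    for tier, availability in sorted(TIER_AVAILABILITY.items()):
        downtime_hours = (1 - availability) * HOURS_PER_YEAR
        print(f"Tier {tier}: ~{downtime_hours:.1f} hours of downtime per year")
    # Tier 1: ~28.8, Tier 2: ~22.7, Tier 3: ~1.6, Tier 4: ~0.4 (about 26 minutes)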

The author is an AT&T master solution provider who specializes in helping customers make the most of their communication and network needs. He works tirelessly to provide powerful, efficient and cost-effective solutions, such as Data Center (http://www.dmsstl.com/) and mpls services (http://www.dmsstl.com/index.php?page=making-the-transition-to-mlps), to address clients' communications needs.

 


Article from articlesbase.com

Preparing for the Future Data Center

Posted by admin On December - 29 - 2010

The average Joe does not know that when he withdraws money from a local ATM, he is initiating a transaction that will update his profile at his bank's main data center. Sometime later, that same transaction will be copied over to another data center to ensure that the bank adequately protects Joe's information, which is a critical bank asset.

The data center houses the most expensive technologies known to the organization. Companies spend tons of money to build data centers that are equipped with the latest in security and built to withstand disasters. This is because the business is dependent on applications that run from the data center. E-mail, data warehouse, billing systems, financial systems, ERP and CRM systems are housed in these near bulletproof rooms.

"Your typical data center is set up to provide a controlled, secure environment for computer equipment. There is a lot of expensive infrastructure set up to enable everything that happens in the production area (computer room) to run smoothly. This includes power systems, environmental controls and security," says Shane Gaskin, manager of the Unisys' Asia Pacific Managed Services Command Center.

The problem today is that the equipment stored in the data center is evolving rapidly. We are seeing faster servers, higher-capacity storage systems, and intelligent network equipment designed to thwart the latest hacking techniques. Users are also moving farther away from headquarters, taking with them critical applications that require the same quality of service at the branch as is available at HQ.

"We call this real-time infrastructure (RTI) with what drives it is software embedded with business information policy," said Mark Feverston, vice president of enterprise servers and storage solutions at Unisys. "RTI allocates computing resources directly and dynamically to business customers' strategic business processes. The operating environment must be able to share application workloads dynamically and automatically optimize resource usage."

According to Charlie Bess, EDS Fellow, "in the future, the forces of standardization, commoditization and virtualization will drive down the cost of the data center and reduce the time to get a configuration online, providing greater capability through the dynamic assembly of lower cost processors into massive networks of computing capability. Until now data center complexity was managed using the most flexible tool available - people. The data center environment of the future will be far too complex for an individual to comprehend. Industrial techniques that have been applied to process manufacturers will be brought to bear."

Nicholas Gall of the Gartner Group includes innovation as another driving force in the evolution of the data center. "The rate of change of the integrated systems and their interconnections is accelerating," notes Gall. "The changes are occurring in network topology, software configuration and application integration. We are witnessing a long-term shift in scope, from assembling standard systems based on standard components that comply with standard designs, to assembling and configuring standard data centers out of virtual components based on standard designs."

Dr TC Tan, a Distinguished Member of the SYSTIMAX Labs, notes that current design parameters for data centers will not change significantly. "A data center is built to provide service, and as such, simplicity, flexibility, scalability, availability, reliability, efficiency, security, location and environmental controls will continue to be concerns for data center designers in the future. Bandwidth hogs like virtualized systems will require robust, high-performance cabling infrastructure."

EDS Fellow Dr Rene Aerdts believes that "the future data center will require more power than we can imagine today." Aerdts predicts that some data centers will need to be built near a water source to provide cooling; today's nuclear reactors are water-cooled and are thus built near a water reservoir or river.

Present-day remote-control technology also means that one day we won't need to be physically present at a data center to do routine maintenance. We will have 'lights-out' data centers where the only time you need to turn on the lights is when someone comes over to perform a physical task. Everything else will be done remotely or through robots connected wirelessly via the Internet.

The future data center will use historical data to perform real-time assembly of modeled configurations. "These same parameters will feed into autonomous monitors, simplifying the management process into something that people can understand," hints Bess.

In essence, "The data center of the future becomes more of an enterprise value network center, pulling together its resources, adding value to organizations through its flexible and powerful capabilities. Its limits will be our understanding of how to gain benefit from it. Where the data center is located and who owns the hardware within it will be of less concern than the advantage it provides to the enterprise. Even with all this automation, it will be the people and their understanding of the business that will make the difference."

The data center of the future will be designed for a service-driven organization supporting the business environment. Provisioning of resources will be done in real-time using a combination of pre-determined policies and algorithms that allow for some level of decision making following set parameters -- the precursor of artificial intelligence.

"As a result, automated, virtualized, operational services (i.e., design, assembly, provisioning, monitoring, change management) become essential for managing the complex dynamic relationships among the components," predicts Gall.

"Given that technology is continuously improving coupled with the increasing speed of change in the business environment, predicting what will happen in 10 years' time is a tall order. No one can be certain that some of today's best practices will be inadequate in 5 years' time. Hence, IT organizations must approach planning, implementing, measuring, and improving their high-availability systems as a continuous process," advises Tan.

Well said.

Jose Allan Tan is a technologist-market observer based in Asia. A former marketing director for a storage vendor, he is today director of web strategy and content director for Questex Asia Ltd. He also served as senior industry analyst for Dataquest/Gartner and was at one time an account director for a regional PR agency.


Article from articlesbase.com