Some 2.5 billion PCs, tablets, ultramobiles, and mobile phones will ship worldwide in 2014, Gartner projects. And as a result, the typical American worker has more device options today than ever before. Workers combine desktops, laptops, tablets, and smartphones to work more quickly, efficiently, and effectively. But what if employees could only choose one of the four? Which device would they want most?
SHI recently commissioned an online survey conducted by Harris Poll among more than 1,000 U.S. employees (aged 18+) to see what device today’s employees would choose if their employer offered to buy them only one for work: a desktop, laptop, tablet, or smartphone. The results revealed a surprising range of preferences, including unexpected distinctions by age, geographical region, and more. Here are the highlights: (more…)
SHI hosted a two-day IT Asset Management (ITAM) Summit in New Jersey last week, attracting 102 IT and procurement executives from 77 companies, ranging in size from less than $500 million in revenue to more than $50 billion. Having joined SHI just six months ago after holding CFO roles at both public and private companies, I have found it interesting to learn so much about the challenges organizations face in keeping track of their hardware and software assets, and what we can do to help them.
These challenges concern not only IT and procurement heads, but also those who manage internal controls and seek to optimize return on investment, such as CFOs, risk managers, and internal audit heads. When you consider how difficult it can be to control cloud and SaaS-based app usage and company data, the situation gets even worse. It’s no wonder that most of these leaders are losing sleep over ITAM issues.
To get a sense of the attending organizations’ approach to ITAM, we asked our audience to respond to a series of statements about asset management, risk, software audits, the cloud, and control over IT. Here are just a few of the things we learned: (more…)
System administrators have a full plate. Maintenance, monitoring, and management of their organization’s IT infrastructure keep them busy, leaving few opportunities to complete a thorough network design. A poorly designed or disorganized network, however, demands more attention and costs more down the road, which makes the up-front time investment worthwhile.
If you have an opportunity to address the key requirements of your network infrastructure and organization as a whole, administration becomes easier in the long run. Here are three major steps for approaching network design to put you on the right path.
Gather initial network requirements
- Know your network. How many users connect to your network — 100 or 10,000? Do you have enough bandwidth to support them? What kind of traffic profile are you looking at? Understanding the traffic on your network will help you make later decisions about capacity and which protocols you need to support.
- Understand your organization’s expectations. What are the requirements for overall uptime on the network? Does your organization need three nines? Five nines? It doesn’t have to be exact, but you want to know what the business expects so you can design a network to support it (a quick way to translate nines into an annual downtime budget is sketched after this list). If your organization doesn’t require anything beyond two nines, paying for secondary power supplies is wasted money; if it requires five nines, skipping that second power supply is just as shortsighted.
- Determine the budget available and how it fits your requirements. The right products for your network depend heavily on budget. The perfect network switch might be too expensive for some organizations, but they still need to choose the right switch family or product line. The switch might need to support certain features, like dual power supplies and Layer 3 or light Layer 3 protocols. It might need to do some kind of inter-VLAN routing, and it should provide both a command-line and a web-based interface. If these requirements are missed up front, they are difficult or impossible to add after the purchase.
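To make those uptime targets concrete, here is a minimal Python sketch that converts a “nines” availability figure into the downtime a design is allowed per year. The percentages shown are the standard shorthand, not requirements for any particular network.

```python
# Translate an availability target ("nines") into an annual downtime budget,
# a quick sanity check on how much redundancy a design actually needs.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability_pct: float) -> float:
    """Maximum unplanned downtime per year, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("two nines", 99.0), ("three nines", 99.9), ("five nines", 99.999)]:
    print(f"{label:>11} ({pct}%): {downtime_budget_minutes(pct):8.1f} minutes/year")

# two nines   ->  5256.0 minutes/year (about 3.7 days)
# three nines ->   525.6 minutes/year (about 8.8 hours)
# five nines  ->     5.3 minutes/year
```

Seeing that two nines tolerates several days of downtime a year while five nines allows barely five minutes makes it easier to decide which redundancy features are worth paying for.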
The perfect network switch is elusive. Organizations want a cost-effective option that supports the majority of protocols, from RFCs to IEEE 802.1 standards. But the perfect switch must also be smart, simple to implement and maintain, and ready to adapt to new technology, all of which is harder to find. It must also be capable of supporting an overall data center network topology that meets the organization’s needs.
Here are five aspects of the perfect network switch, each addressing a common challenge network administrators face when implementing and maintaining switches.
- Uses a command-line interface you’re used to. Most admins don’t want to learn yet another command-line interface. The perfect switch offers an interface similar enough to the one you already use that nothing has to be relearned. A gentle learning curve is an important requirement.
- Has a flexible product portfolio. Work with vendors that offer not one solution, but many different types of solutions. Organizations have different requirements based on what they’re trying to do with their network — cloud switches are different from data center switches and workgroup or office switches, and each type has different needs. By working with a vendor that offers a variety of switches within a product group — from 1GbE to 100GbE — organizations can more easily support current and future requirements.
- Supports future technologies. You must identify new ways to connect switches so you can take advantage of all the links they support. For example, Spanning Tree Protocol (STP) has been widely adopted by networking engineers across most data centers, but a newer protocol, TRILL, keeps a greater number of links between switches active, creating a fabric. And then there’s software-defined networking (SDN), a topic big enough for a book of its own. By supporting future technologies, the switch prepares the overall network for future requirements and enhancements.
- Commits its configuration across the entire environment. When adding another switch to a network, the perfect switch should be able to download the current configuration for the network and apply only the parts it needs for its own place in the topology. This includes VLANs, default settings, route tables, and other applicable networking settings. In a sense, the perfect switch eliminates the need to update every switch in the network; the network fabric does this work for the administrator instead (the manual version of this chore is sketched just after this list).
- Integrates with and understands the hypervisor’s view of the network. There should be integration between the switch and the virtual hosts and virtual machines (VMs). When a VM moves between hosts, its VLAN should either move with it or already be waiting for it on the destination. The perfect switch is smart enough to handle that for the administrator, eliminating the time spent configuring every necessary VLAN on every switch or keeping up with changes by hand.
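For contrast, here is a minimal sketch of the manual chore a fabric-aware switch would eliminate: pushing the same VLAN definition to every switch by hand. It assumes Cisco IOS-style devices reachable over SSH and uses the open-source netmiko library; the hostnames, credentials, and VLAN values are purely illustrative, not a description of any particular vendor’s fabric.

```python
# Push one VLAN definition to every switch in a small inventory, one device at
# a time. A fabric that commits configuration across the environment would make
# this loop unnecessary.
from netmiko import ConnectHandler

SWITCHES = ["sw-core-01", "sw-access-01", "sw-access-02"]  # hypothetical inventory
VLAN_COMMANDS = ["vlan 120", "name app-tier"]              # the config to replicate

for host in SWITCHES:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="netadmin",   # placeholder credentials
        password="changeme",
    )
    conn.send_config_set(VLAN_COMMANDS)  # apply the same VLAN on this switch
    conn.save_config()                   # persist running-config to startup-config
    conn.disconnect()
    print(f"{host}: VLAN 120 configured")
```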
If I’ve seen it once, I’ve seen it a hundred times. Organizations start out by installing a few small-scale servers to back up and restore network data. Then, as the business grows, storage needs increase, leaving IT departments to stitch together temporary fixes. These quick bandages often end up becoming long-term solutions, creating a patchwork infrastructure that leaves organizations overspending time, money, and resources to manage their growing data stores.
This is exactly what happened recently to a point-of-sale and management software provider. The company’s ultimate decision on how to update its environment underscores every organization’s responsibility to proactively design its systems rather than take an ad hoc approach.
The company was quickly outgrowing its data center as business expanded, and it needed a cost-effective way to revamp its storage solution. Instead of updating its existing servers, which would have cost thousands of dollars, the company opted to add two EMC VNX storage systems, three Lenovo RD640 rack servers, and four Brocade ICX6610-48 switches from SHI. While the new hardware was a step in the right direction for its data center refresh, the company had no plan for implementation. Instead, it asked us to step in and handle the update. (more…)
If your organization is like most, your storage array is one of your most valuable assets and also one of your biggest management headaches. But it doesn’t have to be this way. In a perfect world, there would be an ideal storage array, one that not only solves the most common problems in maintaining storage but also makes it simple to get the best performance out of the system.
Here are five of the most common issues that limit how quickly and how effectively organizations can use their arrays, and how the perfect array might solve them: (more…)
We’re entering the heat of summer, and that means we’re all thinking about one thing: cooling down. For data centers in particular, this is a huge concern. Overheated data centers can cause big problems for organizations large and small.
While servers typically shut down when they reach 99 percent of their heat limit to prevent serious damage, these shutdowns can corrupt vital information or wreak havoc on revenue. For the typical organization, unplanned downtime costs an average of $7,900 a minute. For a larger company like Amazon, it can run as high as $66,240 per minute. On top of that, years of running a hot data center cut equipment longevity, forcing an organization to replace servers every two years instead of every five. These costs add up. As data continues to grow in both volume and importance, it’s more vital than ever for organizations to take data center cooling seriously.
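To show how quickly those figures compound, here is a back-of-the-envelope Python sketch. The per-minute downtime cost and the two-versus-five-year refresh cycles come from the figures above; the outage length, fleet size, and per-server price are assumptions made purely for illustration.

```python
# Rough math on what overheating can cost a data center.

DOWNTIME_COST_PER_MIN = 7_900        # average cost of unplanned downtime, $/minute
OUTAGE_MINUTES = 60                  # assume a single one-hour thermal shutdown
print(f"One-hour outage: ${DOWNTIME_COST_PER_MIN * OUTAGE_MINUTES:,}")   # $474,000

SERVERS = 100                        # assumed fleet size
SERVER_COST = 5_000                  # assumed replacement cost per server, $
YEARS = 10
extra_refreshes = YEARS / 2 - YEARS / 5   # replacing every 2 years vs. every 5
extra_spend = extra_refreshes * SERVERS * SERVER_COST
print(f"Extra hardware spend over {YEARS} years: ${extra_spend:,.0f}")   # $1,500,000
```

Even under these modest assumptions, a single thermal shutdown and a shortened refresh cycle together dwarf the cost of better cooling.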
Most organizations have some cooling systems in place to keep their servers running, yet few systems are perfect. Many companies settle for good enough, but this can put information and revenue at risk. Instead, organizations should look into ways to ensure data center health that range from the quick and simple to the inexpensive and impactful to the robust and long-term. (more…)
This post is part of a three-part series on ghost assets.
In my last two posts, I told the frightening tale of ghost assets, the once lively pieces of an IT department’s infrastructure that eventually expired. Yet instead of burying these dead devices, many organizations leave them to haunt their IT departments, and they end up threatening a business’s bottom line and compliance. In the final portion of my tale, I’ll discuss how organizations can finally lay these ghosts to rest.
Exorcizing the specter
Tools are not the answer — they’re only an element of eliminating ghost assets. There is no such thing as a one-shot, out-of-the-box, perfect configuration and inventory management product, though many manufacturers claim to provide such a solution. These products are open-structured and highly configurable, but require tremendous time, effort, and expertise to set up and maintain. Many customers find that they can’t fully leverage the capability of these products without hiring a dozen or more subject matter experts or paying exorbitant rates for long-term, on-site consultants.
IT asset management (ITAM) should be thought of not as a tool, but as a process — one that encompasses tools, personnel, expertise, and procedures. Here are some of the best ways to return ghost assets to their graves and eliminate the risks they pose. (more…)
This post is part of a three-part series on ghost assets.
In my last post, I discussed the ghost asset epidemic many organizations unknowingly face. These assets were once productive test systems, but have since dropped out of focus. They are rogue machines that fall outside the scope of active management and are often effectively invisible to daily IT operations, yet they present serious monetary and compliance risks for organizations. In this post, I’ll explain how organizations conjure these ghost assets.
Abandon all hope, assets who enter here
If there is so much value in these assets, how are they so easily lost? From sepulchral server farms to phantom PCs and laptops entombed in storage closets and desk drawers, there are countless ways assets become ghosts. One of our customers calculated that ghost assets were costing them $1.7 million per month! How can more than $20 million a year just vanish? Here are some of the most common scenarios we see every day. (more…)
This post is part of a three-part series on ghost assets.
The vast majority of IT environments are haunted. Large-scale infrastructures, by virtue of their operational requirements, value high capacity and high availability over asset management. This inevitably means there are ghost assets lurking in most environments — devices whose purpose withered and passed on some time ago, but were not removed or repurposed. Still plugged in and probably connected to a network, they serve no material business purpose. They simply absorb space, power, and resources. A recent article on InfoWorld rightly points out that decommissioning ghost servers saves money on utility bills and datacenter space. However, these wraiths also embody a much more serious risk: software and regulatory compliance exposure.
Ghost in the machine
This post will refer to ghost assets rather than just ghost servers. The term encompasses hardware, software, and maintenance value, as well as any supporting systems needlessly consumed by assets that no longer make a meaningful contribution to an IT environment. Power management, facilities maintenance, middleware, storage, backup, and disaster recovery are all secondary resources a ghost consumes, and they add to its overall cost. But when ghost assets negatively impact compliance, the cost they represent increases exponentially. (more…)