DevOps – Delivering Business Value

Posted February 24, 2016 by Joe Tighe
Categories: IT Infrastructure

DevOps, or software development and operations, is a culture and practice that emphasizes collaboration between software developers and information technology operations personnel while automating the process of software and infrastructure change. A DevOps environment is based on a culture of rapid software build, testing, and release. DevOps processes encourage frequent and more reliable software releases spanning the entire delivery supply chain. Benefits include improved deployment frequency, which can lead to faster time to market, fewer release defects, shortened lead time between fixes, and faster recovery time.

Collaboration

Close collaboration between software development and operations teams throughout the entire system and service lifecycle, from design through development to support, fills in gaps and drives rapid product iteration, much like Agile development processes. DevOps is also characterized by operations personnel making use of many of the same systems and techniques as developers for operational support. Those techniques range from using source control processes and tools, to automated testing, to participating directly in the Agile development process. In addition, DevOps practices ensure that enterprise development environments, physical environments, and processes are set up to deliver new builds into production as rapidly as possible, requiring tight integration of what have typically been separate functions.
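
One practical expression of this overlap is keeping operational artifacts, such as deployment configurations, in source control and covering them with the same automated tests developers rely on. The sketch below is illustrative rather than taken from the post; it is written in Python, and the config file name and required keys are hypothetical.

    # Illustrative sketch: treating an operations artifact like application code.
    # The config file name and required keys are hypothetical examples.
    import json
    import unittest

    REQUIRED_KEYS = {"hostname", "port", "log_level"}

    def load_config(path):
        """Load a JSON deployment config that is maintained in source control."""
        with open(path) as f:
            return json.load(f)

    class DeployConfigTest(unittest.TestCase):
        def test_required_keys_present(self):
            config = load_config("deploy_config.json")
            self.assertTrue(REQUIRED_KEYS.issubset(config))

        def test_port_is_valid(self):
            config = load_config("deploy_config.json")
            self.assertTrue(0 < int(config["port"]) < 65536)

    if __name__ == "__main__":
        unittest.main()

Run on every commit, a check like this catches broken infrastructure changes before they reach production.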

Agile

Agile software development methodologies are an alternative to traditional waterfall frameworks, which rely on step-by-step sequencing and phase gates for software development project management. Agile methodologies instead help teams respond to unpredictability through incremental, iterative work cycles known as sprints, at the end of which teams present a potentially shippable product increment. Because it focuses on the repetition of abbreviated work cycles and the functional product each cycle yields, Agile methodology is described as “iterative.” Extending Agile methodologies into “Run-The-Business” operations delivers more tightly coupled development and operational processes and faster product development and improvement. An Agile DevOps culture provides opportunities to assess the direction of a product throughout its lifecycle and encourages frequent business collaboration to better understand, change and refine business requirements. Rapid prototyping greatly improves business alignment and the speed of business requirement definition. Maintaining a tight feedback loop with the business user community is key to successful adoption of a DevOps culture; that loop can be facilitated by online user feedback forums, vBulletin boards and regular meetings and open discussion with key users and stakeholders.

Cloud Infrastructure

Cloud computing and virtualization are upending the traditional application development lifecycle and the organizations that support it. Cloud enables significantly faster application management cycles, which impose great change on IT organizations. Automation, the critical foundation of the technical capability that cloud computing offers, accelerates the application lifecycle while significantly altering traditional DevOps processes. Cloud computing also forces enormous technical changes in application architecture: much higher scale and load variability, higher performance expectations and new cost models. In addition, cloud automation drastically reduces infrastructure provisioning timeframes. Today it is trivial to obtain cloud computing resources in minutes instead of the weeks (or months) it used to take. In a cloud environment, any remaining lengthy provisioning timeframes are driven by organizational processes, not by the underlying cloud infrastructure resources.
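
As an illustration of that speed, the sketch below provisions a virtual server programmatically. It assumes AWS and its boto3 Python SDK, neither of which is named in the post, and the AMI ID and tag values are placeholders.

    # Illustrative sketch of automated provisioning, assuming AWS and the boto3 SDK
    # (not specified in the post). The AMI ID and tag values are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "environment", "Value": "dev"}],
        }],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Provisioned {instance_id} in minutes rather than weeks")

A script like this runs in seconds; any remaining delay comes from the approval and change processes wrapped around it, which is the organizational point made above.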

SOA

Another key tool of an optimized DevOps organization is SOA. A service-oriented architecture (SOA) is an architectural pattern in software design in which application components provide services to other components via a communications protocol, typically over an IP network. The principles of service orientation are independent of proprietary vendors, products or technologies. A service is a self-contained unit of functionality, such as executing an online bank fund transfer, and may be discretely invoked by an application. SOA combined with web services and XML makes it easier for software components on computers connected over an IP network to cooperate. Every computer can run any number of services, and each service is built so that it can exchange information with any other service in the network without human interaction and without changes to the underlying program itself. SOA is a foundational tool in the DevOps organization.
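
A rough sketch of that idea in Python is shown below: one component invokes a self-contained service over HTTP and parses its XML reply with no human involvement. The endpoint URL and response schema are hypothetical, and the requests library stands in for whatever web service stack an organization actually uses.

    # Illustrative sketch of one component invoking a service over an IP network.
    # The endpoint URL and XML schema are hypothetical.
    import xml.etree.ElementTree as ET

    import requests

    TRANSFER_SERVICE = "https://bank.example.com/services/transfer"  # hypothetical endpoint

    def get_transfer_status(transfer_id):
        """Call a self-contained funds-transfer service and parse its XML reply."""
        response = requests.get(TRANSFER_SERVICE, params={"id": transfer_id}, timeout=5)
        response.raise_for_status()
        root = ET.fromstring(response.text)
        return root.findtext("status")  # e.g. "completed" or "pending"

    if __name__ == "__main__":
        print(get_transfer_status("12345"))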

Security & Monitoring

DevOps provides a significant opportunity to change the way information technology security controls are implemented and to turn them into business enablers. Security participation on the DevOps team is essential to making system security a by-product of everyday work processes. DevOps integrates a number of functional areas, including information security, into the final work product. The major difference in DevOps is that everyone’s input is brought in early in the cycle and then automated, ensuring short, predictable release times and consistent quality. This process ensures security is no longer the bottleneck in the development process. Rather, security teams gain detailed visibility into the computing environments and accelerate security risk assessment. Business goals and security are better balanced in a continuous software delivery model, ensuring business and technology alignment. Standardized security build configurations enable rapid builds. Application log output is one of the most easily implemented types of monitoring because running code already produces output. If applications and services are distributed, centralized logging should be included to provide the full benefit of monitoring that is essentially already in place. Easily traceable exceptions in a production environment dramatically reduce both downtime and support costs.
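
For the logging point above, the standard library is often enough to get started. The sketch below is illustrative: it uses Python’s logging module to ship application log records to a central collector; the host name, port and service name are hypothetical.

    # Illustrative sketch of centralized application logging with Python's standard
    # logging module. The central log host, port and service name are hypothetical.
    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("payments-service")
    logger.setLevel(logging.INFO)

    # Ship log records to a central collector so distributed services share one view.
    central = SysLogHandler(address=("logs.example.com", 514))
    central.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
    logger.addHandler(central)

    try:
        result = 1 / 0   # stand-in for real application work that fails
    except ZeroDivisionError:
        # Exceptions logged centrally are easy to trace in production.
        logger.exception("transaction failed")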

Toolkit

An optimized DevOps organization must have a great set of tools to succeed. The selections below are based on years of experience in the IT industry dealing with development and operations; from building a petabyte-scale data analytics infrastructure, many of these tools and processes have become key components of successful organizations. These tools have been carefully selected (a sketch of how they might be chained together follows the list):

  • MS Visual Studio Test – Unit Test Automation
  • SoapUI – Automated Functional, Regression, Compliance, and Load Testing
  • Nessus – Security Vulnerability Scanning
  • Nagios, Cacti, Ntop – Monitoring
  • GIT – Version Control Automation
  • Puppet – Scalable Infrastructure Automation Management
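
The sketch below shows one way such tools might be chained into a simple build-and-deploy step using Python’s subprocess module. It is illustrative only: the repository branch, test suite and Puppet manifest paths are placeholders, and most shops would drive these steps from a CI server rather than a hand-rolled script.

    # Rough sketch of chaining version control, tests and infrastructure automation
    # into one pipeline step. Branch, test and manifest paths are placeholders.
    import subprocess
    import sys

    def run(cmd):
        """Run a pipeline step and stop the build if it fails."""
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"pipeline failed at: {' '.join(cmd)}")

    run(["git", "pull", "origin", "main"])          # fetch the latest code (Git)
    run(["python", "-m", "pytest", "tests/"])       # stand-in for the automated test suites above
    run(["puppet", "apply", "manifests/site.pp"])   # push the infrastructure change (Puppet)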

 

Business Value

Organizations can accelerate application development, increase operational efficiency and reduce time to market using DevOps practices. Organizations can show customers, in near-real time, what software developers are producing. This almost-immediate feedback loop aligns what is being produced with business needs and reduces time to market for products. Code quality is improved, downtime is reduced and productivity is increased, delivering true business value.

Net Neutrality – 10/22/2009

Posted October 23, 2009 by Joe Tighe
Categories: Regulatory

The US government is proposing broad new regulations for telecommunications and cable internet service providers.

The new proposals appear to target specific providers for regulation and government oversight. Specifically, Massachusetts Representative Ed Markey has proposed the Internet Freedom Preservation Act of 2009, or the “Net Neutrality” bill, outlining government policies to impose new governance and restrictions targeting telecommunications and cable providers AT&T, Verizon, Time Warner and Comcast.

The proposal is based on the unfounded fear that service providers will “control who can and cannot offer content, services and applications over the Internet utilizing such networks.”

The Markey bill indicates the vast majority of consumers receive services from only one or two dominant internet service providers.  And, the bill says the national economy could be harmed “if” these providers interfered with access to internet applications.

The bill proposes regulations imposing equal treatment (e.g. price and performance) of all internet traffic and content, regardless of content type and delivery costs. Specifically, the legislation proposes that internet service providers could not sell prioritized internet applications or services.

The first problem with the proposed legislation is its lack of recognition of the costs to provide internet services. Some applications, such as video, are bandwidth hogs and require significantly greater network infrastructure, and associated costs, to deliver than email access does. Under the proposed legislation, service providers would have to charge low-bandwidth users (casual browsers and email readers) more to offset the higher costs of video users. One result of the proposed legislation would be less consumer choice and a hidden “bandwidth hog tax”. Today, most service providers offer tiered products and pricing to consumers and businesses to account for the additional costs of delivering bandwidth-intensive applications. You pay more if you use more under the tiered pricing model. These are not “discriminatory” practices. Rather, tiered pricing and application prioritization are sound business models delivering reliable, profitable product choices and unburdened internet ecommerce. Consumers and businesses currently have choices. The proposed legislation takes away choice and increases costs to consumers and businesses.

The second problem is that certain applications, such as voice and video over the internet, require prioritization and special treatment to work properly. The proposed legislation makes existing application prioritization products and networking practices illegal. Internet service providers would have to dismantle these services to make all internet applications “equal”, with no prioritization schema. The new legislation would kill off reliable voice and video over the internet as we know it.

The third problem with the Net Neutrality legislation is that antitrust and federal trade regulations are already in place to protect consumers and businesses from monopolistic practices and unfair trade. For example, when AT&T disconnected MCI customers in 1974, MCI filed and won an antitrust lawsuit, resulting in the breakup of the AT&T monopoly. In another example, the Federal Trade Commission recently investigated possible antitrust violations caused by Apple and Google sharing two board directors; Arthur Levinson has since stepped down from both the Apple and Google boards.

The US government would better use taxpayer dollars and valuable legislation time by asking two questions:

Which companies are hiring lobbyists and launching advertising campaigns promoting Net Neutrality legislation?

What is their agenda?

Net Neutrality legislation is not needed.  Consumers would have less choice and higher costs.  Internet service providers would incur additional costs and compliance overhead.  Taxpayers would pay higher taxes to create and support additional government oversight organizations.

What business and consumers need is effective interpretation, oversight and enforcement of existing laws and regulations.

Disclosure – Joe Tighe has no paid relationships, products or endorsements from any company, political or government organization cited in this article.





Discovery – The Art of Finding Reliable Data

Posted June 13, 2009 by Joe Tighe
Categories: Application Inventory, Discovery, IT Infrastructure, Mergers and Acquisitions, Project Management

There are only two kinds of application engineers: those who say “Data is the problem” and those who say “Data are the problem.”

The infrastructure team was provided systems discovery information from a variety of sources, including internal IT, contractors and key IT support vendors. The team was delighted to see “current-state” systems information. However, the team became concerned about the accuracy of the information because key application end-client information was missing. The consolidation team decided to perform first-hand verification of all “current-state” data. The program manager consulted with the senior management team and hired contract systems engineers with extensive application and desktop experience to review and verify the “current-state” discovery information.

In many cases, the contract network engineer had to reverse engineer networking equipment and the actual protocols in use to obtain accurate “current-state” data. In addition, subject matter experts from the acquiring company were engaged to verify and document as-built server and storage system documents. For example, reverse engineering revealed that the current-state network documentation did not reflect actual configurations and existing routes. In another case, a key subject matter expert identified networking equipment that was owned and maintained by telecom provider Sprint rather than by the company. Additional verification revealed that remote access equipment had been only partially configured and was unusable; the original project had been abandoned by the IT staff short of being fully configured for operations. Application verification and reverse engineering also identified software that had been written with third-party vendor network addressing and communications hard-coded into the code, bypassing the data file interchange through the company’s data interexchange server.

The verified “current state” was significantly different than the “current state” provided by vendors and staff.

Lesson learned:

There is no hard-and-fast reliable information source. That would be too easy.

IT Contracts – Due Diligence

Posted October 28, 2008 by Joe Tighe
Categories: Application Inventory, Contracts, IT Infrastructure, Mergers and Acquisitions, Project Management, Supply Chain

 

A Slight Misunderstanding

 

In one acquisition, a domestic technology outsourcing company was contracted to provide key IT services through a twelve-month transition period. Upon signature of the acquisition agreement, the integration transition team asked for the IT equipment access credentials. The outsourced service provider, a well-known M&A services leader, refused to provide the credentials. The problem turned out to be a difference in interpretation of the contract agreements by the IT service provider’s subcontractors, also well-known industry leaders. The acquired company spent the next four months renegotiating the contract to secure the release of the equipment access credentials.

 

A review of the transitional services contract revealed vague contract language with no enforceable service level agreements or “exit” clauses. The acquiring company’s management team implemented a successful strategy of appointing key staff members to review all service requests and develop direct-line relationships with the subcontracting IT service provider to ensure delivery of services. The acquiring company had several large contracts with the subcontractor and had considerable financial leverage in driving service delivery.

 

Lesson learned:

 

All misunderstandings concerning contract specifications will eventually be resolved.  The best time to do that is before the acquisition agreement is signed. 

 

Tip: Hire dedicated IT contract M&A specialists to review, negotiate and (re)write all IT contracts prior to acquisitions.

 

https://joetighe.wordpress.com

Application Inventory – Due Diligence

Posted October 19, 2008 by Joe Tighe
Categories: Application Inventory, IT Infrastructure, Mergers and Acquisitions

 

Application Inventory Strategy

Acquire business application information during the due diligence phase.  Adopt a strategy that will identify the most critical applications in advance of the acquisition and focus the available task resources on critical business applications first.  Where the target company lacks application information, immediately allocate skilled M&A IT resources to reverse engineer applications.

 

Key business applications are often undocumented or misunderstood by the target company.  

 

A company with mature information technology internal controls will have documented, detailed application interface inventories, along with the business application stakeholders and lead information technology personnel for each application. However, many IT departments do not have mature internal controls and systems documentation, particularly startup companies or smaller enterprise organizations.

 

In most merger and acquisition programs, expect minimal application inventory documentation. Review the target company’s business continuity plan first for information, but expect outdated and inaccurate application information. Key applications may be missing from the documentation. Additionally, there is often no technical description of the application interfaces or the technology owners, and business owner information is often outdated or inaccurate. Be prepared to assign experienced M&A system engineers to reverse engineer applications. In addition, be prepared to assign skilled M&A systems analysts to interview business units to identify ALL key applications and to interview software developers to identify application programming interfaces. And expect a low level of cooperation and urgency from the acquired company’s development team.

 

Expect significant application data gaps and identify the risks early on, during the due diligence process if possible. Some development departments operate independently of the operations departments, creating additional gaps in integration knowledge. Allocate sufficient time to perform application programming interface discovery. External contract engineers can be utilized as needed to reverse engineer applications and provide interface inventories.

 

Allocate adequate discovery time during due diligence to close the application knowledge gaps.  The following resources should be allocated as part of the overall IT M&A program:

 

  • Allocate 75% of your time to collect application interface data.
  • Fill in the blanks by reverse engineering core applications.
  • Run a trial cutover on low priority servers and desktops to identify gaps.
  • Allocate 25% of your time to resolve application issues during conversion.

  

By following this strategy, you will focus the integration teams on key applications having the greatest impact on the business and you will increase your rate of successful IT M&A program outcomes. 

 

 https://joetighe.wordpress.com

 

Project Management Scaling

Posted October 9, 2008 by Joe Tighe
Categories: IT Infrastructure, Mergers and Acquisitions, Project Management

 Project Management Scale – Acquisition Integration

 

Be careful how you scale the information technology integration project management effort.  You might get it.

  

IT Infrastructure integration efforts can be brought to a standstill by excessive project management controls.

 

In one mid-size enterprise organization merger, the project management office (PMO) was often a valued change agent in the IT infrastructure conversion effort. However, the project management controls and processes applied were often over-scaled, overly complicated and better suited to large-scale government programs. If your project management team is oversized, you can expect excessive overhead costs and staff meetings driven by junior staff members who add little value to the process. A large project management organization may offer an excellent training ground for junior staff and contract consultants; however, the resulting output will often be of low quality or unusable. The overhead administrative costs relative to the size of the acquisition may be unacceptable.

 

By scaling the project management effort to fit a mid-sized acquisition, a simplified set of project controls and a smaller project management staff may be sufficient.

  

Project Management Scaling

 

The first rule of organization is to scale to fit the size of the task. 

 

For mid-sized acquisition efforts, consider limiting the project management staff to two or three key members. Ensure the project management staff reports to the acquiring company’s program management office. This will size the integration organization to the task, provide appropriate governance and ensure alignment of the local project management teams with the acquiring company’s integration objectives.

 

Trim the fat.

 

https://joetighe.wordpress.com

 

