Our vendors always deliver the best price
It sounds so silly, but it happens all the time: the longer you know each other and the longer a partnership exists, the better it gets, right? Not really. In 90 % of all partnerships, vendors in particular do their best to push prices up, get out of project price agreements, announce new hardware that is out of contract, and a lot more.
And they know you, your setup, your hardware and software; they know your employees and their habits and behaviour; and they do their best to "support" your organization in getting better prices (for them).
This is not wrong; it is their business intention, and it is not as bad as paying above-market prices because of buddies in upper management (even that happens, I’ve heard about it 🙂 ).
So how to deal with it? Trust your partners when it comes to the concept behind what they offer you, but handle them carefully, as they know you and the underlying organization. Treat them with care if they have done their job well over the long term, because business partnerships can create business value. But keep challenging their models and prices ALL the time by bringing in either challengers or direct competitors. The three-offer approach of large companies is not that silly; yes, it is work, but the intention is to know the market price and get a price below it.
And when you challenge them, never press the price below their bottom line; you should be interested in a vital and profitable partner, but you do not want to overpay either. Keep the balance right!
Next, let them know that you challenge their model and price, so they keep delivering (their) best possible results, but also give them the trust of being a preferred or strategic partner. Inviting others should show them that you are interested in the best price and the best model, not that you intend to change vendors. If you do intend to change, tell them why, what happened, and how you are willing to cooperate with them in the future. But never blame your long-term vendors!
We know it better
Potentially yes, but how can you be sure? It is tremendously important that a growing organization knows what it is able to deliver itself and how to get additional know-how/expertise/resources on board. This is not a plea for externals, but:
- understaffed organizations fail in the long term
- there is no room left for innovation, so you stop developing yourself, your employees and your organization
- peaks should be offloaded and the innovation stream should not be broken
- you cannot know everything the way the market does
So keep in mind that you should run your strategic stuff yourself, and when you start innovating, bring market experience in. This does not mean that external partners do all the work. Where necessary they should support, assist and coach you, but you are the one who knows your business best, and if you bring in externals, do not forget to manage them. They are just people, maybe good ones with special expertise, but they are people just like your employees: they need orders, direction and management. An uncontrolled external is a high potential risk for your organization, because he is seen as a special know-how carrier and can not only break your project but damage your organization too.
To sum up, two things: bring in external partners where necessary, as nobody expects you and your org to know everything better than the market. And manage those partners so they deliver successful, on-time projects.
BCM is not Infrastructure
It is always the same. Nobody knows why, but the myth of “we must have BCM” is in everyone’s ear within the IT department. The first enthusiasts start reading up on what BCM (Business Continuity Management) should look like, and immediately the first failure begins:
- BCM is a Business topic
- IT should be invited to join business-wide BCM, not vice versa
- BCM is a process driven topic, not infrastructure
So BCM is much more a company-wide umbrella, and as IT Service Continuity Management in ITIL v2 already states, the mission of IT is to support Business Continuity, not to run or drive it.
What we often see is a direct link being drawn between resilience, high availability and BCM. Yes, in the end, all those ways of providing a more robust service will likely be part of the mitigation plan of the BCM initiative, but keep in mind that BCM for IT means:
- Define business-critical processes
- Define the technical services supporting those processes
- Assess and analyse the risks associated with those services
- Define an action plan for those risks (accept, mitigate, delegate, solve)
- Define an operations handbook and declare how the disaster process is invoked by ops staff
- ….
Reading those lines shows that only the mitigate/solve part of the risk plan is a real “technology” topic.
It is definitely always a good idea to think about risks and how to mitigate or solve them. But BCM is much more a method to direct your thoughts and ideas towards the mission-critical services; in a disaster nearly nobody will need a resilient lab 🙂 (unless you are a research organization). So focus on the technology that supports your value, do your best there, and stop thinking in a tech-only way: it is the business that should be supported, not the nicely blinking light in your rack ^^
To cloud or not to cloud …
…. that’s the question
I do not intend to talk about the general facts about clouds, cost topics and the different types (PaaS, IaaS, …), and I am not interested in debating whether an offer is SaaS or cloud or not. My topic is much more operational: what I want to show you is that a cloud, private or public, does not solve all your problems the way you may have intended it to.
If we talk about PaaS/IaaS, a cloud is just another type of infrastructure provisioning, nothing more, nothing less. You still lack support for your application, and in public clouds you will not find it that easy to run a load balancer or other resilience mechanisms.
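To illustrate the point (purely as an example, not part of this post's toolchain): provisioning a server from an IaaS API takes a few lines, but nothing in those lines backs up, monitors or load-balances your application. A minimal sketch, assuming AWS EC2 via boto3, with placeholder image id, region and key name:

```python
# Minimal sketch, assuming AWS EC2 via boto3; the AMI id, region and key name
# are placeholders. All this does is provision a plain virtual machine --
# backup, monitoring, load balancing and application support are still on you.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-central-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-ops-key",     # placeholder key pair
)

print("provisioned:", [i.id for i in instances])
# At this point you have infrastructure -- not an operated service.
```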
Despite the fact that all major cloud computing offerings try to declare their cloud safe, unlimited, boundlessly resilient and (D)DoS-proof, we still know that Murphy exists. Talking to vendors about resilience today mostly ends with them uttering non-geeky sentences like “a cloud can’t fail …”
So even if you are aware of the potential risks and already know how to deploy your app(s) in your favourite in-house or external cloud, you should still think about how operations changes when using a cloud. Be aware of topics like the following (one simple way to make this explicit is sketched after the list):
- How about backup/archiving?
- Am I still able to fulfil all my current and foreseeable compliance requirements, and how?
- How will my release process, my associated toolbox and my service support processes change?
- Is it a strategic or an economic value, and how do I live with that?
- Is my ops platform able to run in the cloud?
- Is there any benefit from using a cloud, or is my app still missing out on major advantages?
- Do my vendors and their licences support running their apps in the cloud? Is the licence cloud-enabled?
- Do I need special hardware?
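As a purely illustrative sketch (the wording of the questions and the example answers below are my own, made-up values), the list above can be turned into an explicit go/no-go checklist so that no question stays silently unanswered:

```python
# Purely illustrative: the questions above captured as an explicit go/no-go
# checklist. The example answers at the bottom are made up.
CLOUD_READINESS_QUESTIONS = [
    "Backup/archiving concept defined for the cloud setup?",
    "Current and foreseeable compliance requirements still met?",
    "Release process, toolbox and service support processes adapted?",
    "Strategic or economic value identified?",
    "Ops platform able to run in the cloud?",
    "Concrete benefit for the application identified?",
    "Vendor applications and licences cloud-enabled?",
    "Special hardware requirements ruled out?",
]


def ready_to_move(answers):
    """Return True only if every question has an explicit 'yes'."""
    open_points = [q for q in CLOUD_READINESS_QUESTIONS if not answers.get(q)]
    for question in open_points:
        print("still open:", question)
    return not open_points


if __name__ == "__main__":
    answers = {q: True for q in CLOUD_READINESS_QUESTIONS}
    answers["Backup/archiving concept defined for the cloud setup?"] = False
    print("ready to move:", ready_to_move(answers))
```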
The main issue I see right now is that we all talk about how cool it would be and how nice and easy everything will look in the cloud. I am only talking about operations here, and there is still a bunch of unanswered questions. So please don’t say yes or no to a cloud because of style or your personal relationship with the vendor; think about the questions above, and only if you can easily answer all of them should you seriously think about running a cloud.
2 people are a NOC
Despite the fact that people cost money, you will never be able to run a commercial 24/7 site with just two people in a secure and safe way. On the one hand you burn out your employees; on the other hand you will potentially break local law regarding workers’ rights.
And, to be honest, if you want to run your application you have to think about what the work of a NOC will be. Is it:
- monitoring, remediation
- reporting, escalations
or even:
- application support
- ops tool maintenance
- tasks like backup, rollback
- QA topics?
The more you think about it, the more you will come to the conclusion that an active NOC can be a major advantage for your organisation and business. If it is built not as a technical call center but, as the name says, as an operations center, then you will gain major advantages. But this means you need a structure and the right people: not two, and potentially not hundreds either. And you need time; a working NOC is not a matter of a bunch of definitions and nice mission statements. You need role separation (dev, sys engineering, ops, NOC), technical clarification, and the setup of the NOC itself, including processes, space, people and resources.
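A purely illustrative sketch of what "monitoring, remediation, escalation" means as operations-center work rather than call-center work (the service name, URL and restart command are assumptions): a check fails, the NOC applies a documented first-level remediation, and only escalates to the on-call engineer if the service stays unhealthy.

```python
# Purely illustrative NOC cycle: detect -> remediate -> escalate if still broken.
import subprocess
import urllib.request


def service_is_healthy(url: str) -> bool:
    """Naive health check: does the service answer HTTP 200 within five seconds?"""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False


def remediate(service: str) -> None:
    """Documented first-level remediation, e.g. restarting the service."""
    subprocess.run(["systemctl", "restart", service], check=False)


def escalate(service: str, message: str) -> None:
    """Hand over to the on-call engineer (mail, SMS, ticket - whatever you use)."""
    print(f"ESCALATION for {service}: {message}")


def noc_cycle(service: str, url: str) -> None:
    if service_is_healthy(url):
        return
    remediate(service)
    if not service_is_healthy(url):
        escalate(service, "restart did not recover the service")


if __name__ == "__main__":
    noc_cycle("shop-frontend", "http://localhost:8080/health")
```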
So what does this mean for startups?
You should think about a shared NOC, or about when the right point in time will be to think about a NOC at all. And believe me, for a long time there will be no real need to get such an org structure up and running. Try to work based on on-call procedures as long as possible. A NOC costs money, even if there is major benefit. And a NOC requires working structures and procedures. So only start building a NOC once you are already on top of your processes.
Developers are good Operators
This is definitely what often happens in startup companies. You have an idea, you have a development team, and you have business/expenditure pressure. The easiest way is to let developers do IT ops stuff (backup, monitoring, day-to-day operations …). But what happens if you do?
First, there is no separation between the inventor (developer) and the operator, and thus no border between fancy new stuff and a stable platform. Secondly, there is no declared transition process that shifts the proven and tested app to the operational platform. Third, those two groups think totally differently: while a developer is interested in getting fancy new stuff done, an operator’s holy grail is to provide a stable, reliable and proven (never-changing) platform.
Next, if you don’t differentiate, you will have real trouble keeping dev tasks out of the online/productive environment. How fast would it be for a developer to fix a bug online if he does not have to test it in the background first? But this means acting directly online, with no protection line, no safety border in between.
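A minimal sketch of what such a protection line could look like in practice (all names, roles and fields here are hypothetical illustrations, not this post's prescription): production deployments are only accepted for artifacts that went through staging/QA and carry an operations sign-off.

```python
# Hypothetical deployment gate enforcing the "protection line" between
# development and the productive environment: only QA-tested, ops-approved
# artifacts may be deployed, and only by someone acting in the ops role.
from dataclasses import dataclass


@dataclass
class Artifact:
    name: str
    version: str
    passed_qa: bool          # set by the QA/staging pipeline, not by the developer
    approved_by_ops: bool    # explicit sign-off from operations


def deploy_to_production(artifact: Artifact, actor_role: str) -> None:
    """Refuse any deployment that bypasses QA or operations."""
    if actor_role != "ops":
        raise PermissionError(f"{actor_role} may not deploy to production directly")
    if not artifact.passed_qa:
        raise ValueError(f"{artifact.name} {artifact.version} has not passed staging/QA")
    if not artifact.approved_by_ops:
        raise ValueError(f"{artifact.name} {artifact.version} lacks operations sign-off")
    print(f"deploying {artifact.name} {artifact.version} to production ...")


if __name__ == "__main__":
    # A developer trying to hot-fix production directly is stopped by the gate.
    hotfix = Artifact("shop-frontend", "1.4.2-hotfix", passed_qa=False, approved_by_ops=False)
    try:
        deploy_to_production(hotfix, actor_role="developer")
    except (PermissionError, ValueError) as err:
        print("blocked:", err)
```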
There will be endless discussion, and yes, there are examples out there showing that developer and operator can be one person without any problems, but the vast majority of IT people will not be able to do the same.
So keep in mind: different tasks with different attitudes and behaviours need different people!
The idea behind
Thanks to Mona Pearl: after my presentation of Ops Dos and Don’ts in Berlin during the European Venture Market, she motivated me to bring that knowledge forward and start a blog. To be honest, I had been thinking about it for a while, but I never took the time to actually create the blog.
So what is the intention of the blog? Day by day I see IT / online operations failing (Opstakes, aka operational mistakes) for different reasons. The question is whether you can avoid the same failure in your company or not. And, to be honest, if you run a native online business and ops fails, you are lost. So there is tremendously more pressure on doing the right operations at the right time.
I – Tom Peruzzi – and aicooma, the company I founded and run, do a lot of operations and startup operations work, so this is much more a “real life view” than any university material (tbh, I really like university work). If any of my customers think a post is their story, feel free to comment; I will never post any names/companies, as I am interested in explaining and discussing “generic” failures. If it happened in your company too, feel free to comment on it.
Please support my blog; it is a first for me, and hopefully I will be able to build a vital community with all of you and your ops failures out there.
