> A rare breed in the boss stakes, for sure. Then again, this is his story after all.
Very nicely put ... well done
We start the working week with a Who, Me? reminder that in the world of databases, there is a difference between knowing you're right and knowing your rights. "Joff" submitted today's story, which takes us back to the mid-2000s and the divestiture of his company from a US multinational. He was running a team tasked with …
My hat is off to you, good sir or madam.
I do not think I would do well in management, for I do not give enough of a sh*t to exert the effort.
I shall stay here with the working engineers, and create stuff, for that is what I know and that is what I am apparently good enough at not to have been discharged for incompetence, lo, these thirty-five years.
Thanks, but I was merely commenting on "Joff"; I too am a techie at heart. I've sniffed at management and didn't like it at all.
If however it comes to politics and back-stabbing, I am quite good. A long time ago I was temping somewhere and something happened that I didn't agree with. Neither did the two persons directly in line above me, but they thought they were powerless. Comes along the head honcho (five levels above me) on his daily round, and I make a nice, complimentary remark about how this new policy made work go a lot quicker. The guy two levels above me just about gave birth to a porcupine ... in breech position. Within ten minutes, work was stopped and the person responsible (three levels above me) got a bollocking without ever finding out where it came from. Within one more hour, the situation was corrected.
Mentioned it before I think.
The great Manchester Comms fire.
I was first in that day and couldn't reach anyone on the DR teams.
So took the decision myself to shift all linklines to other offices then waited for the fallout.
Luckily we were trialling Sametime, so at least had some semblance of real-time comms given that even mobile networks were down.
All worked out in the end.
By lunchtime I had a single working phone line, courtesy of Otis lifts on the floor below, who were on what's now Virgin, with a cable strung out the window down to their floor.
2 days later my colleague at our main site in Bournemouth had to crash a DR meeting to explain that just because the Manchester office could call us didn't mean we could move everything back!
Happy days. Not...
Dave
I remember the day our Security team lectured us about the importance of system security (can't remember the exact kerfuffle at the time, but it turned out to be a non-issue anyway).
Cue 2 days later and they are in a panic. A change was made on one of their systems (in relation to the aforementioned kerfuffle) and they had locked themselves out of the root account - which they needed access to.
They call me out of desperation, and it takes me all of 5 minutes to identify a root-owned file with 777 permissions in init.d, which I used to hack into the system (see the sketch below)....
They were actually really good guys though, and we all had a good laugh about it.
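For anyone wondering what that hole looks like in practice, here is a minimal sketch of the kind of check that turns up such a file. It's an illustration under my own assumptions (a POSIX box with /etc/init.d), not the actual method used back then; it only reports candidates and modifies nothing.

```python
#!/usr/bin/env python3
"""Sketch: find root-owned, world-writable scripts in init.d.

Assumption: scripts live in /etc/init.d. Any file flagged here can be
edited by an ordinary user, and whatever gets added runs as root the
next time the service starts - which is exactly the hole described above.
"""
import os
import stat

INIT_DIR = "/etc/init.d"  # assumed location; adjust for your distro

def world_writable_root_files(directory: str):
    """Yield (path, mode) for regular files owned by root and writable by anyone."""
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        try:
            st = os.stat(path)
        except OSError:
            continue  # skip anything we can't stat
        is_root_owned = st.st_uid == 0
        is_world_writable = bool(st.st_mode & stat.S_IWOTH)
        if stat.S_ISREG(st.st_mode) and is_root_owned and is_world_writable:
            yield path, oct(st.st_mode & 0o777)

if __name__ == "__main__":
    hits = list(world_writable_root_files(INIT_DIR))
    if hits:
        for path, mode in hits:
            print(f"{path} (mode {mode}) is root-owned and world-writable")
    else:
        print("No root-owned, world-writable files found in", INIT_DIR)
```

The moral being, of course, that a 777 on anything root runs at boot is an open door, security team or not.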
Nice solution to the problem, but things like this have a tendency to backfire the next time. I would have documented the request and every communication to the DBA team to get the database up in time (with proper dates and times) and let the deadline whoosh by. In the subsequent auto-da-fé, the DBA team would be the ones for the pyre.
Not just document, but also start raising it up the command chain.
So as the days go by without a response - you pass a message to your management that you've had no response. And you escalate the tone of those messages - so a few days beforehand, you are telling your management that you will not be able to go live "because this other department are not co-operating".
That way, you have evidence that you did all you could to make things happen, AND you can document that your management knew there was a problem, AND you can document that you did everything within your power/authority to make things happen. If the brown stuff does hit the air movement device, then you should be out of the firing line - or at least you've made yourself a good umbrella. But hopefully, if your management is any good, they'll have made things happen and the problem will have been resolved before that happens.
It's not mentioned in the article, but it's also good to make sure that in your request it is clear what the timescale is - you don't want the other side to be able to use the excuse that they weren't aware of when it was needed/it was urgent/whatever.
"because this other department are not co-operating"
Been there, done that, was ignored. Got so pissed off with the clusterfuck that I resigned for the sake of my sanity. Eight months after the original go-live date I'm told that it has finally gone live, to what might best be described as low acclaim. Now several former supposed colleagues are being forcibly resigned. I won't weep for them.
I've found that works in some companies, but not in others. One company I used to work for was so good at that approach that very little got done in any decent timescale because of the sometimes-circular blame-chains that developed (some links justifiable, some just well-worded excuses).
Another company almost had the opposite problem. If something landed in your lap and there wasn't enough time to get the appropriate people involved, it was your job to fill the gaps to the best of your ability. No excuses. I've set sites live before with no input from the persons who usually supply content, imagery and design, nor any legal oversight on the various policies, terms and conditions that were required (all requested several times over a two-week period). At point of set-live, I made it quite clear to every senior manager involved in the project what I had done, why I had done it, and what really needed to be reviewed and by whom at the earliest opportunity.
A month later, still no changes.
Was part of a project where I needed about a day's input from another team so I could finish my tasks, but their guy on the project had been grabbed for something else. I started emailing him and his team lead (and .cc-ing my manager and the project manager) about 6 weeks before the pilot was due to start, saying I either needed their resource back, or someone else asap. I did this at least twice a week, as well as speaking to the team lead whenever I saw him, until a fortnight before the pilot, when I started doing it daily.
Three days before the pilot, I was working away and got a call from my manager, the project manager, the other team lead, and a couple of the others involved in the pilot, angrily asking why I hadn't delivered yet. I politely pointed out that I'd been asking/emailing for 6 weeks without getting anywhere, so they needed to find somewhere else to point the finger. I also forwarded them all of those emails whilst we were "discussing it". They allocated someone to work on it that day, and I did what I needed remotely the following day. Sorted.
When my manager called me 15 minutes later to apologise "for putting me on the spot, but they'd come to his office on the warpath", I told him in no uncertain terms that he should be eternally thankful I was 250 miles away, otherwise I'd be taking him outside and punching him in the fucking mouth - career-ending or not - and that went for any future occasions also. Never had that problem again...
Don't leave us hanging - why were the DBA team able to detect abuse of permissions nearly instantaneously, but not able to detect multiple emails asking when the database would be ready?
I've had similar in the past: you ask for something to be created and the team seem to think that creating said item is the end of their responsibility and that telepathy or osmosis will serve to get the information needed to access the new resource to you.
Strikes me that this DBA team are a classic example of the JIT resource team. If you need something by date X, they take pride in supplying it on date X, and not a second earlier. They could supply it earlier, but why should they do that if you don't need it until date X? The fact that date X is the last possible date on the schedule, and if it arrives a few days earlier it will make life easier for someone else doesn't seem to cross their minds.
Agreed that JIT is the problem. The timing of the DBAs trying to do their thing the day before go-live makes me think that Joff and team asked for DB availability on the go-live date itself. In a project like that, you ask for a delivery date early enough to do your testing, plus any necessary buffer to resolve problems (and to escalate if the DB admins don't deliver).
If you tell me you need something by Saturday the 27th, and I deliver on Friday the 26th, don't blame me if you will be late because you still have four days worth of testing to do, and just assumed that I'd deliver at least a week before the due date.
Also very possible that Joff and crew were transparent about their timeline needs and the DB guys screwed up. If so, Joff should have been raising red flags about risks to the go-live date due to the missing DB support. You do that diplomatically, pointing out that the DBs may have legit constraints (Joff may have been rolling out a new system to track the daily specials in the cafeteria while the DB team may have been rolling out a new eCommerce platform to double revenue). That way you have cover.
If you tell me you need something by Saturday the 27th, and I deliver on Friday the 26th, don't blame me if you will be late because you still have four days worth of testing to do.
Fair enough. But if I want it BY Saturday 27th so that I have four days of testing, that doesn't mean I want it ON Saturday 27th. If you manage to deliver it on Monday 22nd, I'm not going to complain. Like I said, it seems to be a matter of pride for some teams that they won't deliver until the very last moment, just because that's the way they like to work.
I worked as a contractor on a large national project where a lot of sites were linked to a centrally sited Oracle database. Like most projects, even though the Systems Architect had produced his overall, signed-off design, the project suffered from the usual creep due to permitted customer changes and observations by the grunts (aka coders) of flaws in the design. As usual, the coders' observations were initially ignored and only reluctantly implemented after manager interventions, accompanied by the usual time-wasting meetings.
This process was all well and good for logic changes but, if they involved database changes, you were really up a gum tree. The DBAs considered themselves Gods, and any changes had to be challenged, approved and (possibly) implemented by them. They really were a bottleneck.
The whole thing became tribal after the proggies discovered an early open-source database tool and started to do their own thing. I remember a DBA spotting a screen showing the data in a table with its attributes and a structure graph, and he screamed "you cannot do that!"
Yes, I know that DBAs have a very important position (and salary) but, during development, they should be more approachable.
I had a short contract as a DBA to cover somebody's 2 week holiday plus a week overlap either side. Most of that seemed to be occupied with the paperwork to get permission to add another chunk to the Informix database and to get the sysadmin team - who were separate and whom I never even saw - to allocate an LVM volume for it. This was in the same industry where, in a much larger operation, my team handled everything - Unix and database administration, development and support. The contract clients are no longer in business.
That sort of narrow silo'ing has become more and more popular in the managed services world. I expect the reason is because they wanted to hire cheaper people who had a 'certification' in one thing. People who know for example Unix admin, EMC storage admin, and Oracle DBA are a lot more expensive than someone only able to handle just one of those.
When I've contracted for such managed service providers I've tried to impress upon them the usefulness of at least having a few senior people who are skilled enough in multiple disciplines that they are allowed to "scale the wall" and for instance execute a complex change that requires working in more than one silo. The coordination required during changes is a kind of a pain ranging up to a nightmare otherwise.
Of course their real plan is to have everything done in India, at least until somewhere cheaper is found, so I guess from that perspective this makes sense. I imagine a guy fluent in multiple technologies can pretty much write his own ticket in India and wipe away most of the savings you'd otherwise get from outsourcing, so they actively DON'T WANT to encourage such individuals to have a role in the managed services world.
Unfortunately it sucks for clients, as the more coordination that is required the slower things happen and the more mistakes that are made. But hey, a lot of that is on them for constantly choosing the lowest bid and telling their MS provider they expect a significantly lower price when they renew if they want to remain. So then you not only have the overhead of the silos and walls, but the shitstorm handover process from one MS provider to another where the incumbent has no incentive to make things easy on the new provider, and the new provider has every incentive to place blame on the incumbent for everything and start recommending change orders to fix everything they've supposedly done wrong (when much of the time it is just "we don't understand how the incumbent was doing things, and can't be bothered to try to figure it out ourselves")
I wonder how much of the siloing and stuff is partly due to senior managers (c-suite types) having come up through the fashionable "management is a unique skill and you don't need to know anything about the product" system. It seems to foster a view that sees the whole as just being the sum of the parts - to be assembled elsewhere - which may be true for some things, such as, say, a car. But it is useless when the product is a one-off and the parts need to be fitted together during the process.
I once had a very similar situation. I walked into the DBA team manager's office, and said ever so politely "This means that Quentin Senior-Director [name Regomised] won't be getting the new application he has been promising everybody for months. Will you tell him or shall I?"
I have never seen a DBA move so fast.
Gov't agency had multiple Cisco VoIP systems that a contractor was supposed to be converting into a single system, One Ring to Rule Them All being back in fashion. The problem was that the contractor was shiite when it came to their technical skillset, as opposed to their world class VIP bribing skillz. I had been the field manager for all of those multiple VoIP systems when we installed them, but had since gone away from Cisco and let those certs expire.
Seeing the impending doom approaching, I tried to point out problems and provide some guidance, but since I no longer was certified, they ignored my warnings. I then used my unrevoked access to copy all router and switch configurations for every system, and made full backups of the VoIP servers I'd set up.
Moving Day happened, the contractor switched everything over, and nothing worked. The contractor's techs went spastic, the agency's phones stayed off-line, and all was chaos. They hadn't set up a way to back out of the 'upgrade' (one of the problems I'd noted) and weren't able to make their setup go live for more than one site. At that point I got in touch with the CIO and his boss the deputy director, and asked them if we were allowed to return to the previous setup. Some calls to my fellow site techs later, we had the old VoIP servers restoring and I reverted router and then switch configurations as well. At the end of a rather long day, everything was back up.
The contractor did their best to claim that I had sabotaged them, and my network access was revoked. The CIO agreed with the contractor, but it turned out that the deputy director had not gotten any dosh from them, and was not having any of it. The contractor was turfed out and the CIO 'left to pursue other opportunities'. I stayed on as a site tech but took early retirement soon afterwards, as was 'recommended'. As we know, no good deed goes unpunished, and that goes double for government work.
I once remember coming into w**k to find that my admin access had been revoked. I thought it was a colleague playing a prank on me, and had a jolly morning playing pranks back until we came together and realised that we were both locked out.
Fortunately we were able to continue w**king cos we knew the password to the 'Backup' user, that just happened to have admin rights at that time. Can't remember the reason for that, but being more than 30 years ago I'm prepared to believe there was an innocent reason.
It turned out that our PHB had heard that admin rights should be controlled, and decided to take them away from his two system admins! And hadn't even bothered to tell us! When he resisted giving those rights back to us we just started routing EVERY task that needed them to his email/phone. Surprisingly, everything returned to 'normal' less than a week later.
Had something similar happen to me and a team-mate. Our manager had requested that some admin roles in our main application be removed from him as he didn't need them, and the person removing them also removed them from us because he thought the idea was that nobody in our team needed them. But they were required for a significant part of our work, so we got them put back without much fuss (although there was a little bit at the beginning, when I logged into the system around 7pm to do some data updates required for the overnight run only to find out that I couldn't access the screens I needed - fortunately there was one other person still in the office and he happened to have admin access, so he gave me mine back on a temporary basis until we could work out what had happened).
The OP forgot the first rule of scheduling. Don't tell anybody your _real_ drop-dead date.
Tell the people you need things from that your drop-dead date is several weeks/months before you really need it, preferably matching the minimum "request time" they require.
Always allow more time for your own delivery than you really need.