Java call stack – from HTTP up to JDBC, as a picture
June 6, 2006 110 Comments
Created this by pasting together a few screenshots from a NetBeans profiler session – of a Spring + Hibernate web-app running within JBoss. Quite interesting to see how the business logic is just a tiny part of it all.
Java EE is a lot about abstractions, which I have grown to appreciate over the years. However, I find this very difficult to explain to my colleague who sits just across the room – he is a Mainframe veteran with tons of experience :) In fact, I have not shown this to him yet!
You can download a better PDF version of this, which you can expand and zoom to your heart's content.
[update 2006-07-27: a post with a Ruby on Rails stack trace]
No wonder Java web applications are slower than PHP spaghetti, even though Java itself is 10 times faster than those… especially the Spring–Acegi–Hibernate stack of abstractions is horrendous. I believe they should go back to the drawing board and stop trying to be “extra-flexible” and “extra non-intrusive”. Sometimes I wish I could just use JSP scriptlets with direct JDBC calls…
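For contrast, the framework-free style I'm wishing for looks roughly like this – just a minimal sketch, with the connection URL, table and column names made up:

```java
// A servlet talking straight to JDBC – no Spring, no Hibernate, no Acegi.
// The JDBC URL, credentials, table and columns below are invented for illustration.
import java.io.IOException;
import java.sql.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class OrderListServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/shop", "user", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT id, total FROM orders WHERE customer_id = ?")) {
            ps.setLong(1, Long.parseLong(req.getParameter("customer")));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {   // write each row straight into the response
                    resp.getWriter().println(rs.getLong("id") + ": " + rs.getBigDecimal("total") + "<br/>");
                }
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}
```

Everything in one place – for better or worse.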
Looks ugly, but how long does it actually take to execute? Maybe it is not that slow at all?
Not slow at all :) I have run JMeter load tests with excellent throughput results and will post the data soon.
Let me say that the productivity and elegance I have experienced with Spring MVC + WebFlow is superb – I wouldn't have it any other way. With Acegi, I don't need to deal with JAAS anymore and the security is fully portable across WAR containers.
I see the frameworks as code which I don't have to write to do things right – and my code ends up maintainable and standards compliant. One take on this is that with Spring, you *actually* end up being able to focus more on the business logic…
Ok, so what if the *virtual* call-stack is huge? The actual runtime call-stack which is optimized and inlined by the VM might be *much* smaller. The hotspot VM uses 2 call stacks – an optimized call stack and a virtual ‘source-level’ call stack.
Even better, the bytecode in those methods is being profiled and optimized all the time, with runtime optimizations that aren’t possible at compile time.
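If you want to watch that happen, a rough sketch (assuming a Sun HotSpot VM; the exact flags and their availability vary by JVM vendor and version) is to run a trivial hot method with compilation logging switched on:

```java
// A tiny, hot accessor like getValue() is a prime candidate for JIT inlining.
// To observe HotSpot's decisions, try running with flags such as
//   -XX:+PrintCompilation   and   -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining
// (flag support differs between JVM vendors and versions – treat these as examples).
public class InlineDemo {
    private int value = 1;

    int getValue() {                 // trivial getter: typically inlined into the caller
        return value;
    }

    public static void main(String[] args) {
        InlineDemo d = new InlineDemo();
        long sum = 0;
        for (int i = 0; i < 10000000; i++) {   // hot loop so the JIT compiler kicks in
            sum += d.getValue();
        }
        System.out.println(sum);               // use the result so the loop isn't dead code
    }
}
```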
Tell THAT to your mainframe friend! :-)
Don’t you just love stepping through those indirections?
Tarun,
You are challenging my curiosity now – are there any tools to view the runtime stack? It doesn't need to be a pretty picture like what Peter has produced here.
A call stack ten times that size would still be negligible compared to the network comm latency. Get the perspective right.
“A call stack ten times that size would still be negligible compared to the network comm latency. Get the perspective right.”
You’ve missed the point of the post. He’s comparing the business logic to the basic machinery to move some bytes.
Besides, the networking code would simply increase the call stack.
And further, you’re even wrong about your basic point: the real bottleneck is likely the persistence layer (the database) anyway.
Just to throw in my EUR 0.02:
I’m not a developer but an administrator of (among other things) J2EE engines (Sun, SAP and IBM’s), and when I see something like this, it really makes me want to do something very different…
This is horrible, insane and disgusting all at the same time. It MAY BE nice-and-nifty at best for developers, but all this bloat in memory just to change some persistent data and finally get a “change done” back to the browser after stepping through three dozen layers? Uff… call me ignorant, but I still believe that this could be done in a much nicer way.
I remember that some years ago everyone was crying about the “DLL-Hell” of M$, now what is that then?
—
a frustrated administrator
I would guess that many of those calls are indeed inlined by the JIT at run time; there are surely many trivial calls in that tree.
But, on the other hand, I don’t agree that simply because the network is slower than the processor it’s okay to write inefficient code. If I am right and the JIT simplifies those calls, then it is okay; but if it is slow yet just fast enough to cope with the network, then you have a system that doesn’t scale well when it’s hit by a Slashdot mob.
I don’t use Java very much, so I don’t know the answer, but there are many, many people who swear that this solution is good. So my guess is that the JIT takes care of much of what we’re seeing here.
Markus, you are ignorant (sorry, had to).
See the posts above that explain that the stack above actually executes very quickly.
Being a senior developer, I would hate going back to the 90’s, when JDBC calls were executed from servlets.
There has been an amazing evolution in decoupling, abstraction, maintainability AND performance improvements at the same time.
No need to go back.
David, maybe I am – I can well live with that.
And just like I stated, it’s a playground for developers – but a WASTE of resources. When I see those J2EE engines really dissipating GBs of RAM, no matter how “fast” the code is actually executed, it’s bloat, necessary or not :)
I’m not blaming those who use it or develop it, nor do I want to offend anyone; it’s just my EUR 0.02.
Maintainability? Maybe for you – but digging every day through hundreds of exceptions without having the sources at hand is a nightmare compared to PHP, Perl, CGI, etc.
Starting a portal application running on a J2EE engine takes about 12–14 minutes on a BIG 6- or 8-way SMP box, malloc()’ing GBs of RAM and throwing MBs of human-unreadable nested logfiles onto the filesystem – that is far from being performant and maintainable, from an administrative point of view.
I would really love to discuss this in depth – without starting flame wars or offending anyone; maybe someone could convince me to also love this technology, but until then I will avoid it – and recommend that others do so too if they ask for my opinion.
—
a (still) frustrated administrator
#12 Markus,
It is the same thing everywhere in the industry. Developers love the stack traces because they point out exactly where their error is, and administrators hate them, because they are not familiar enough with them to pinpoint the root cause of the error.
Likewise, you will find that many Java-only developers have a hard time reading PHP/Perl traces – I myself have worked extensively with both Perl and Java, but when I occasionally fix a web page in PHP it will often take me a few more minutes to pinpoint an error, simply because it’s not my main field of expertise.
I don’t recall any project I’ve been on where it has not been like that :-).
As for memory usage, I imagine the example shown in this blog is not that bad. Spring favors using “singletons”, so usually very few objects need to be allocated. The same thing should apply to good JEE servers: extensive pooling of objects.
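For example – a minimal sketch, with the bean name and config file invented – the default singleton scope means repeated lookups hand back the same instance instead of allocating a new object per request:

```java
// Spring beans are singletons by default, so "orderService" is created once
// and shared; the bean name and applicationContext.xml here are hypothetical.
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SingletonCheck {
    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
        Object a = ctx.getBean("orderService");
        Object b = ctx.getBean("orderService");
        System.out.println(a == b);   // prints "true" for the default singleton scope
    }
}
```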
Big applications will take a lot of memory, and portal applications could be (almost certainly are) an example of that. However, I don’t agree that this means they are not performant and maintainable; that’s exactly the benefit of using a standard framework such as a portal.
—
A happy developer ;)
Soren,
I understand your point – and you’re certainly right with your argumentation, that developers love them and administrators hate them.
I don’t have a problem with stacktraces in general, I can even well interpret C or C++ stacktraces to some extent.
So you think it’s worth the time and the resources to display some nice boxes in a webbrowser and to get some user interaction with such big bloat?
If I need to become a developer to be able to actually administer a system then there is something wrong – in my opinion.
If you had a car and you’d need to become a machinist just to get your fuel refilled, wouldn’t you ask if this was necessary or if this could not be done differently?
—
Markus
To me this diagram shows the success of Java. We have abstracted away the machinery of moving data and can create code that focuses on the one valuable thing: the business logic. And that single bit of valuable code that the business actually cares about is small – that is a good thing.
Administrators have a hard time with this decoupled model, this is true. And as the industry learns how to deal with this stack of frameworks the administration will become more rational and less diverse.
Making the business logic pure had to happen first, though, because at the end of the day the administrator’s job is to make a system run, and the programmer’s job is to make a system the business wants. The business’s need to change will always trump the administrator’s desire for systems that are more understandable.
Having two or three lines of real business logic, you need two dozen layers of bloat before and after. IMHO this is not an improvement but the opposite, and it shows me the real importance and necessity of that “bloat”.
There have always been dependencies – operating system versions, shared libraries, environment variables and such. This is all manageable, if you don’t drive it insane. Another example of such an almost unmanageable system is GNOME. I wonder why they don’t put every function in a separate library; even that would make at least my work easier…
I was always under the impression (and I was taught so) that OO programming should bring us closer to real-life objects with their attributes, parameters and methods, not move in the opposite direction.
If I want to get from NYC to Miami, I don’t book a flight via Prague, Paris, London and Tokyo just because it’s easier for the airline to book that flight.
Today I opened a call because we had yet another exception on a J2EE engine – I needed to talk to 5 people, and the ticket was sent around to 12 people in total before they found out where the real problem was. That’s also a kind of abstraction: nobody is responsible, but everyone develops yet another framework on top.
So much for the manageability and insusceptibility of abstractions.
—
Markus
I think Markus has it right. If there’s a problem here, it’s probably not processor cycles — if we cared about CPU time and stack depth, we’d be developing in assembler language.
Instead, the problem is that a deep abstraction makes it very hard to debug and performance-tune an application, because it’s so difficult to know what’s actually happening at the lower levels. Database servers, for example, are not yet stable and predictable enough that you can just throw arbitrary SQL queries at them and expect acceptable performance – sometimes even switching the order of WHERE clauses can result in a huge performance change (it shouldn’t, but it does).
That’s not to say that abstraction is always wrong. Using HTTP through the Java networking library, for example, is also a deep abstraction, but (a) it’s better understood, and (b) the unpredictability comes from the underlying network, and cannot easily be optimized by source code changes anyway.
Pingback: Aehso’s Output » When frameworks attack.
The only thing this call stack really shows is that there are so many truly magnificent frameworks out there for programmers who work in the Java programming language that the developer can concentrate completely on the business logic.
With other programming environments, you might have to write all that infrastructure code (receiving and decoding the HTTP request, security checks, dynamic wiring, the data access layer) yourself.
Anyone concluding from that call stack that Java is slow, that the memory consumption must be enormous, or that the code is not maintainable, is drawing the wrong conclusion.
Moreover, the development team uses all these open-source libraries because they chose to, not because they have to.
To David:
I would disagree with your point about debugging. Long stack traces are excellent for debugging. Short traces have less information. I’d rather debug a system built from many methods with descriptive names than a system with fewer longer methods with generic or misleading names.
Pingback: thoughts.on.code » Anatomy of a call stack
I’m not sure what the fuss is about… To me this is a good set of abstractions, highlighted by the fact that the developer can put brackets around sections according to what environment they’re in (Tomcat, Acegi, et al.). There is nothing in here that indicates code bloat, memory bloat, or slowness. In fact, I would say the opposite is likely true, because it’s likely driven by very good design patterns.
Pingback: Ned Batchelder
Pingback: Birgers Blogg - på internettet! » J2EE Kallestakk
Pingback: » Complexity is not abstraction : Pensieri di un lunatico minore
Lars (#19),
You’re right – it’s a paradise for developers, no doubt. As for maintainability – I’m NOT talking about developers here – I’m talking about the ADMINISTRATORS who also need to deal with those applications in their everyday work – and for THOSE people, those nested structures are just a nightmare – a complete mess – just read the example about the ticket I needed to open and you will get what I mean.
And as for the memory thing: if you want a sheet of paper with a big X on it, do you really install OpenOffice, Word or Photoshop onto a PC JUST to print that X on the sheet of paper? Of course you MAY do more with that software, but if you want to just print an X, do you really go through that whole installation to do it? That’s what I mean by commensurability.
I don’t need a BIG BIG full-blown J2EE engine to execute three lines of business code, the same way I don’t need to install OpenOffice to print an X on a page. Just because you CAN do more with it doesn’t mean you necessarily USE it.
Or, to put it in C/C++ terms:
If you want to use ONE function out of a 100 MB static library, do you really statically link that whole library into your executable because you need that ONE function?
And as for maintainability: if some bug in some application is fixed, I always need to deploy the FULL application again. For one of our applications, those quarterly fixes are now already 1.6 GB (!) of SCAs, EARs and JARs. Patching those applications takes a LOT of time (about 6 hours on an 8-way box) – just to get fixes. If that is not a WASTE of resources, and if someone tells me that this is “maintainable” and “fast”, then I really, really don’t know…
—
Markus
Pingback: Does not compute » Blog Archive » That ain’t a stack trace, THIS is a stack trace
So – if I, e.g., give you an SCA file and not much more information – will you be able to resolve the dependencies and deploy it so it will run?
And what if it were about 25 of them – will you be able to deploy them in the correct order, and will you know whether the J2EE engine needs to be running or not?
—
Markus
Pingback: Standard Deviations » Standing Firmly On A House of Cards
Pingback: Multicouches et stacktrace at Aurélien Pelletier’s Weblog
I don’t get the systems administrator complaints about Java stack traces. I became a software developer after many years of systems administration. I had to deal with a multi-million-line Perl e-commerce system that I was ultimately responsible for keeping running. I became a developer after having to fix code that was failing in a production environment. It was very hard to tell what was going wrong in Perl code.
The primary application I’m responsible for now is Java. I do get complaints about “java junk” in the logs from the SAs, but I can just tell them to paste whatever “java junk” they don’t want to see into a bug report, and then I actually have enough information to do something about it. Nothing is worse than a bug report that is nearly impossible to reproduce, with no information that will actually tell me what happened.
I’d think any SA would be thankful that developers can now provide them with enough information to actually fix whatever problems are affecting a production system.
That said, I’m still not a fan of a giant stack. As good as abstraction is, there are costs for each of those frameworks.
We’ve had bugs in Struts code at customer sites. That’s one layer of abstraction that broke down and required us to spend quite a while in foreign code (and, unsurprisingly, the part with the bug was quite complicated, difficult to understand, and not well documented). We had a variety of problems with our commercial JDBC driver at the same time. They manifested themselves in very strange ways. A good chunk of the code executing within our application is our own. I’ve got a good idea what it’s like dealing with code breaking about a third of the way down that stack, and two-thirds of the way down that stack, with my code in the middle.
I know what it’s like to depend on a library that’s buggy, but no longer maintained. I’ll do it when the benefits are very clear, but it’s got to make things more than just a little better.
I agree with Markus – even from a developer POV.
The problem is not performance; the problem is the sheer amount of boilerplate and infrastructure and so on in these gigantic frameworks.
The verbosity of java complicates the matter further.
The stacktrace is a symptom of an overdesigned architecture.
The thing here is about the trade-off between development time and complexity.
I would rather have well defined common frameworks, than use in-house frameworks that someone developed because he decided that his solution would be faster/better/lighter.
Developing frameworks isn’t an easy task and you would want to use one made by professionals, thoroughly tested and in common usage.
Also, most people have trouble discerning between what’s possible and what’s necessary. You don’t need to use every available framework if you don’t have the need for it, but if you want to quickly develop a scalable web application and forget about handling persistence, transactions and other things, you’ll quickly forget that the stack trace looks a bit ugly.
“The Right Tool For The Right Job”
A semi-content Java Developer
To #29: I don’t want to become a developer, I’m quite happy with my job; I just want to be able to run and administer the systems WITHOUT needing in-depth knowledge of each of the nested frameworks used.
To #31: It’s like the nice-and-nifty Microsoft RAD tools: create a project, generate some UI and point-and-click your application together. Same thing: bloat, bloat, bloat and lots of dependencies.
Now Java does in fact the same, and everyone is happy; nobody cares about memory consumption (because you need at least 2 GB of heap for each of the server processes) – the memory is there, so why care about it?
Humm.. I don’t know if this is really a good direction…
—
Markus
Markus, complaining about an application needing 2GB of memory, like you do, is like complaining about Samba putting broadcast packets on the network; lame.
I’ve dealt with administrators who made huge issues of memory, disk and network usage that was no problem in practice at all, just in their state of mind. I’ve even dealt with admins who had huge problems with software simply because it was using technology X (subsititute with your favourite language/framework/os/tool/whatever).
Why is it so damn difficult to find professional administrators who do not wish the world still ran on a PDP-11 with a 24×80 terminal screen?
Maybe I’ve simply met the wrong kind of admin people.
I’ll have a latte now. To calm down.
Mr C.
Coffee:
I’m not complaining about the memory issue, I’m claiming that the application running on it is WASTING it – just by design.
“Why is it so damn difficult to find professional administrators who do not wish the world still ran on a PDP-11 with a 24×80 terminal screen?”
Nope, you totally misinterpreted my point.
I have a direct comparison of J2EE vs. non-J2EE with comparable functionality in the application – with a tenth or less of the resources, and it is even maintainable for me, while displaying the same nice graphics in the browser and actually doing the same thing.
I have given a lot of examples of why I think that this kind of runtime engine is bloat and, from an ADMINISTRATIVE point of view, unmaintainable.
I’m not blaming those who use it, nor do I want to offend anyone, nor do I say that “Java is crap”; I just don’t want to be forced to start being a developer just to maintain basic functionality in the system – and at the moment that is actually what is required.
—
Markus
Just a thought. I’m not sure where all this framework based development is leading us.
I’m totally sure that it is impossible not to find at least a few errors in that enormous call-stack trace. I mean, if you have to debug code in 1000 source files to understand what is really happening, that is not the direction I think we should be going. The complexity each call adds may be very little, but imagine when you have traversed 1000 calls – and I’m not talking here about performance/memory, just complexity, which is the big problem here. SA and dev people have to deal with complexity all the time, and this is adding more and more all the time. What is the solution? Not sure, but I think we should start looking into these issues more closely if we want to keep some sort of real-life experience out there :)
Cheers,
P.
Marcus,
I appreciate your position, I wouldn’t want to maintain a WebLogic cluster either.
However if using the framework saved 6 months of dev time, and there are 3 developers, that’s $150,000 in salary saved. Even if you have to spend another 10 grand on a new server, it’s still nothing compared to the labor cost. Even if you hire a full-time SA to run the servers, you’ve still saved at least 50k.
The same argument goes for Rails scalability: even if you need 10 new servers to run the app, it was still written by one guy in 2 months so you save $100,000 in labor.
It all may be bloated and run “slowly”, but who cares when the hardware is cheap and fast? Who cares if you need more machines to do the work? You need fewer humans. Economically, there’s no argument.
Pingback: Labnotes » Blog Archive » Can you abstract too much?
Pingback: benjismith.net » Blog Archive » It’s Turtles, All the Way Down
John (# 36):
I well understand your argument, and it’s surely true that you get really nice results quickly when using these frameworks; if you needed to write PHP or Perl code to achieve the same, it would take you much more time, that’s for sure.
But on the other side, if you need to train your administrators to become at least part-developers to keep the system running, and maybe hire another one, then the maintenance costs of such an application will be the same or even higher than if you had used a different technology.
This becomes even worse when you have multiple platforms with different VM implementations offering a thousand -XX: options to run and tune your applications (talking about the VMs from HP (PA and IA64), Sun and IBM), including all the quirks of excluding particular methods/classes from JIT compilation, and so on.
It’s a technology for achieving quick results – but I can’t follow your TCO argument; in my 15+ years of experience, the MAINTENANCE costs far exceed those of the pure development, especially when an application’s typical lifetime is more than a year.
Together with one colleague I administer 36 SAP ERP systems, including the databases, on different platforms – and I’m able to because of their clean design and architecture. Of course errors happen there too, but they can be found and solved without being a developer.
Compared to this, I have to administer 12 J2EE engines, and that is a full-time job, and then some. It’s tremendously cumbersome to locate an error if you are not a developer who knows all those frameworks blindfolded; there is, for example, no such thing as a “user context” – if an exception is thrown on a production application where 1000 users are logged on, I’m unable to quickly find out who produced that exception; I just see that “there’s something wrong somewhere” – and the time needed to fix that error (including opening tickets with the vendor) will VERY quickly drive the TCO to arbitrary amounts, especially when the production application no longer works correctly.
—
Markus
Pingback: Antti Holvikari » Blog Archive » Now this is bloat!
to #38 and #40
That story is really nice and reflects absolutely and exactly what I’m thinking about this.
Things are becoming too complex, and I’m sure that even if I were a developer, I’d still not be able to find the root cause of errors occurring on one of those 12 engines, because I just don’t know how all those frameworks work and interact – and if you have to manage 12 of them and keep them running, you don’t have the time to start debugging each of them in depth.
—
Markus
IMHO, one key issue is that Java (or C#, or C++) is a general-purpose language, not suited for expressing business logic or tailored towards an application format (web, embedded, server-side, client, etc.). The same applies to the Java (or .NET) runtime, which provides just basic support for running object-oriented applications.
For every requirement not related to business logic (persistence, distribution, transactions, resource management, authentication), code has to be written to deal with it, or a framework that provides that functionality must be adopted.
Unlike the proprietary 4GLs so popular in the 80s and early 90s, all this code that belongs to lower levels of abstraction and deals with so many requirements shares the same status as the core business logic code (there is no clear separation between infrastructure and user code). And business logic is (often) simple, so it should come as no surprise how small it looks compared to the rest.
So, my point is that one issue here is more one of information overload. Most of the time, we get far more information in stack traces than we care about, when all we are doing is debugging the business logic. If somehow all the infrastructure gory details were hidden, people would complain much less.
A related issue is that since much of that plumbing code is not part of the platform/runtime environment, managing and deploying Java applications is harder.
At the same time, I acknowledge that many Java frameworks tend to be over-engineered. Everyone thinks their part of the problem is important enough to warrant multiple JARs and ten-level-deep stack traces.
My R$0.02…
How can you defend a solution with this many layers? !!! Inserting a record? Wow.
Easy development ;)
You stipulate the interface to the next level and let the developer(s) run. They can concentrate on that specific level and don’t need to care about anything else. Later you “just” deploy it – and you’re done.
It is a perfect infrastructure for modular, distributed development, and that is why all developers love and appreciate it. And as stated above by another poster, it doesn’t matter if you need even bigger or more boxes, or more memory; what counts is the result and the speed of development needed to achieve it.
BUT: BIG problems will arise if you have an error situation – maybe not even a real exception, but something wrong with the content. In the worst case you need all the people of all the layers to find out whether the error is in their “layer” or not and who’s responsible for fixing it.
The current ticket I opened is now at the JDBC level, after six days of being sent around to various people responsible for the various layers/frameworks in India, Russia, Israel, the US, Germany and elsewhere. Everyone needs access and debugging capabilities (on the production system) just to find out “our layer is right” and pass it on to the next one.
The main issue is that all this doesn’t count – since no decision maker takes those situations into account in their calculations; those are based only on “how much, how quickly, for what amount of $$$”. Maintainability comes third or fourth, if at all, in their “less less less $$$”-driven decisions, and THIS is what urgently needs to be changed and reiterated again and again in their minds.
—
Markus
I find it ironic that this advanced website-building feature… doesn’t even get the MIME type on the PDF file right :-)
Pingback: La Cara Oscura del Desarrollo de Software » Stack de llamadas en Java :: desde HTTP hasta JDBC
I agree with Markus on the fact that just because hardware is becoming cheaper doesn’t mean you should write programs that hog memory or use IDEs that require 1 GB minimum just to boot up. That’s ridiculous.
Pingback: kompiebutut's terminals :: To Much frameworks will “kill” you :: June :: 2006
Pingback: captsolo weblog
Pingback: developers.org.ua » Blog Archive » weekly linkdump
That is crazy. I made a stack trace of our java content management system called OpenEdit. Check it out: http://www.openedit.org/wiki/images/stack.png
Pingback: Kyle Cordes » Blog Archive » Long Rails Stack Trace
Pingback: refactr » Blog Archive » Problem: 80% Plumbing, 20% Business Logic
Recently I performed a small test to show the dependency between call stack depth and the CPU time needed to instantiate a Throwable.
See the blog entry:
http://www.jroller.com/page/ejboy?entry=expensive_throwables
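The gist of it, as a rough sketch (not the original code from that entry): Throwable fills in its stack trace when it is constructed, so the deeper the stack, the more a plain new Exception() costs:

```java
// Recurse to an artificial stack depth, then time how long it takes to
// construct exceptions there – the cost grows with the depth because the
// constructor captures the whole stack trace via fillInStackTrace().
public class ThrowableCost {

    static long timeAtDepth(int depth, int iterations) {
        if (depth > 0) {
            return timeAtDepth(depth - 1, iterations);   // build up stack frames
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            new Exception();                             // stack trace captured here
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        for (int depth = 0; depth <= 200; depth += 50) {
            long nanos = timeAtDepth(depth, 10000);
            System.out.println("depth ~" + depth + ": " + (nanos / 1000) + " us for 10000 exceptions");
        }
    }
}
```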
@Markus
“to display some nice boxes in a webbrowser and to get some user interaction” – for that I’d probably use PHP or Ruby on Rails.
To implement a high-transaction, clustered enterprise service bus communicating over JMS, FTP, HTTP, RPC and SOAP, I’d rather use J2EE and existing OSS.
Right tool for the right job.
You may or may not be right that the ultimate TCO is higher for a rapidly developed J2EE system versus a system built from scratch (in a low-level language).
This is, however, something that I hope managers with economic responsibility at Fortune 500 companies get right.
Sorry to hear that you are sitting at the Administrator’s end of a highly modularised j2ee app :-(
There are a lot of confused issues here. The first is the idea that the depth of this trace is in and of itself a problem. It’s very complicated and deep, but to get the same sorts of behavior through means other than abstraction, the local code would have to be very complicated or contain lots and lots of duplication.
Nothing is free; you can’t shrink this stack without adding complexity elsewhere. The argument over whether this complexity is -necessary- for a web application is entirely different.
The libraries I see there allow the code author to (more or less) not worry about network protocols, user login and role administration, database access, workflow.
It’s not as if this application threw this as an exception. -that- would be terrible.
All of these complaints about unintelligible log messages, poor application performance, etc., are really just saying “my developers are bad” or “my administrators are bad”. And yes, most developers and most administrators are pretty bad at understanding the downstream consequences of their actions.
Your developers should have some admin in them and your admins should have a lot of developer in them, and you both have to work together. It’s a fiction to think that developers can work in a vacuum and throw the code over the wall to the admins to run.
Is abstraction bad? This trace shows about 11 layers of it. I bet there are -way- more than 11 more if you take it all the way down to voltage levels in the hardware. Is that inherently bad? I’d say that, rather than being inherently bad, it makes computing as we know it possible.
So I find this stack interesting, enlightening, beautiful, and terrible all at the same time. I wouldn’t read anything more into it than that.
I want to know the details of how to print the stack trace like this… thanks
Use the NetBeans profiler. Start the application server (in this case JBoss) with the extra JVM params required for the NetBeans profiler to connect; it will wait for the profiler to attach. Start the profiler session and the app server will boot. Then run the application, click through screens etc. (which may be very slow if you are profiling ‘all classes’). You can take a profiler session snapshot and save it to a file if you want.
To Rob/#58:
We are a USER of application software, we don’t develop them.
We expect to buy a portal application, get documentation on how to run it and some (maybe even limited) documentation on how to troubleshoot it, and actually use our time to customize it and fill it with data rather than extend it by adding Java code.
That’s what I expect from a piece of software (as from any other software, such as backup software, database engines or even a simple Apache application).
We don’t have direct access to the developers, just as we don’t with Oracle, Veritas or any other software product.
The problem for me as an administrator is that
* there IS NO or TOO LITTLE documentation
* there is no source code
* trying to find out the error is VERY time consuming – due to abstraction of abstraction of abstraction – you just need too many people to find out what is going on where and at what time.
We USE software to run our business; we’re not a technology test-driver, nor are we developers who sell engines or software running on top of them.
So from a plain customer’s view I can’t accept your opinion, since this would involve developers – and we don’t want or need them for our business. If having them is necessary to run such an application, then for us it’s the wrong path – which is just what I was trying to argue.
—
Markus
Update 2006-07-29: A Ruby On Rails stack trace is also available…
Pingback: Brandon Smith » Framework Complexity
Pingback: Museum of Natural History
Pingback: Andy Pols’ Blog » A picture paints a thousand rants
—-
> The problem for me as an administrator is that
>
> * there IS NO or TOO LITTLE documentation
>
> * there is no source code
>
> * trying to find out the error is VERY time consuming – due to abstraction of abstraction of abstraction – you just need too many people to find out what is going on where and at what time.
—-
Even if this is a bit late, I have to say that the problems mentioned have absolutely nothing to do with the heavy use of abstraction in that particular piece of software.
I’d rather blame the developers for not documenting it well enough. In such a highly modularized system, it should be most important to give the user a comprehensible overview of the architecture. If that’s missing, it’s not the system that is bad, but the people who made it.
Pingback: Маниакальный Веблог » Скорость фреймворка
Pingback: Standard Deviations » J2EE Rant
Pingback: Zdot » Blog Archive » DB connection problems? Try a logging datasource!
Pingback: Wordpress 2.0 & Typo themes - Tim Shadel » DB connection problems? Try a logging datasource!
Pingback: Yariv’s Blog » Blog Archive » Museum of Natural History
Pingback: Detail-oriented Programming | Skydeck
Monster, monster… It is disgusting to see Out of Memory errors in so many Java and Microsoft applications.
When the end user has paid $250 for his spanking-new next-gen hardware and you have claimed 99% of it as your birthright, there is the Out of Memory sign on the seventh driver’s license number he enters. How sick is that?
Most applications specialize in handling one database row at a time, if they think of user data at all.
Come to think of it, the sum total of most small-business data, expressed as text, would fit into the space of a couple of Java stack traces like the one shown.
We ought to honestly evaluate whether what we are doing is right, or whether it is hocus-pocus.
@Suresh:
Please have a look at this post
When are people like you going to understand that a long stack trace does not mean the end of the world!?
And BTW, the link to your home page ( banyansoft.net ) is down. I get this message:
Monster, monster… wtf?
After reading a few of the comments I have reached a conclusion: programmers are lazy.
It is very nice when your development environment/language/framework goes and does a whole lot of work for you; you can write code and leave work at 3 PM – but I promise you will be called back to the office at 2 AM when something goes titties-up…
Where is the pride? I am a Ruby (and RoR) developer and I am currently writing an XMPP server. Instead of sitting on top of WEBrick/Mongrel, I take the code and paste it into a new file. I then decide what I don’t need and delete it.
A mortal sin, I tell you – they taught us to always use libraries if we could. In reality, WEBrick and Mongrel are brilliant for Rails/HTTP apps, but not for TCP ones (even though I could come up with something); I would rather have complete control.
It’s always a trade-off, simplicity/development speed versus stability/memory/response time. The ideal language/framework in my opinion would land on the 50/50 mark. That stack trace looks like it’s a good 80/20, or worse.
C’mon! It’s what you learnt to do, and what you should love doing! Have some pride in the performance of your code! You will be much better off if you spend 30 minutes more appropriating code (and inlining it yourself). That’s the power of open source: appropriation (just don’t go publishing those appropriated libraries, so that we end up with libpng and jimmys-png and bobs-png and…).
From the Mongrel website (and very true):
“Complex things are more fragile than simple things. Your application is going down the same road as the Roman Empire, and just like them you don’t realize it.”
The author of the Mongrel website also recommends running *20* Mongrel processes per CPU. I would love to see a benchmark of Peter’s app under similar conditions.
Just my ZAR (South African Rand) 0.05…
Computers and programs were, and are, made/developed (mostly) for commercial use. If money is a resource too, then there is no issue with some bloat in the code if it means leaner spending in the long run. I’ve been a developer who still likes simple code, but not at a heavy price. We always need to weigh where we use Java or any other language. The usage perspective will decide whether code bloat is important or not. We don’t complain about having to wear a heavy overcoat in winter when a lightweight one is either much more expensive or simply doesn’t do the job.
To Sridhar and Jonathan above, let me repeat: stack traces mean nothing. Let me quote myself from a comment on this post showing a Ruby on Rails stack trace:
Really interesting post!
Never stop iterating and don’t fear failure. Choose well-understood conventions where they will do the most good; shortcuts you take now will cost you more to fix later than it would cost to get them right up front today.
Thanks, Zoli Juhasz
Pingback: Scaling Rails for Large Applications | lando.blog
Pingback: refactr blog on software development, design, agile processes, and business Blog Archive » When plumbing outweighs business logic 10:1
Pingback: Converting Java developers « Does Not Compute
Pingback: Tech Your Universe » PHP vs Java vs C/C++ for web applications
No wonder these applications run so slowly. Too much plumbing. Flexibility comes at a cost.
nice share…thanks
Pingback: " Une" par Le weblogue de SeB
Pingback: Gmail’s Permanent Failure: Only Humans Can Build Software For Humans | BJD Productions Blog
Pingback: Gmail’s Permanent Failure: Only Humans Can Build Software For Humans | news
Pingback: Gmail’s Permanent Failure: Only Humans Can Build Software For Humans » Tech Reviews
Pingback: » Gmail’s Permanent Failure: Only Humans Can Build Software For Humans - Teched Up
Pingback: Gmail’s Permanent Failure: Only Humans Can Build Software For Humans « Articlepills.com
Pingback: Gmail’s Permanent Failure: Only Humans Can Build Software For Humans « Hubcom.org – Tech News 24-7
Pingback: Bichos y trazas « brucknerite – Un blog de Iván Rivera
Pingback: What are the pros and cons to keeping SQL in Stored Procs versus Code | Everyday I'm coding
Pingback: How to: What are the pros and cons to keeping SQL in Stored Procs versus Code | SevenNet
Pingback: Fixed What are the pros and cons to keeping SQL in Stored Procs versus Code #dev #it #asnwer | Good Answer
Pingback: Answer: What are the pros and cons to keeping SQL in Stored Procs versus Code #it #dev #computers | IT Info
Reblogged this on Marius reshares.
Reblogged this on quuux.
Pingback: Ugh – kronikaparanoika
Pingback: What are the pros and cons to keeping SQL in Stored Procs versus Code [closed] - QuestionFocus
Pingback: New top story on Hacker News: Java call stack – from HTTP upto JDBC as a picture (2006) – Tech + Hckr News
Pingback: Java call stack – from HTTP upto JDBC as a picture (2006) – f1tym1
Pingback: Frameworks: Because stack traces should be 60 lines long
Pingback: New top story on Hacker News: Java call stack – from HTTP upto JDBC as a picture (2006) – ÇlusterAssets Inc.,
Pingback: What are the pros and cons to keeping SQL in Stored Procs versus Code [closed] – Knowleage Exchange
Pingback: NODE.JS – Single Threaded và Event Loop | tiktok.vn
No matter how ugly it looks, there’s no avoiding it, because in order to create a good product we need to divide our code into functions.
At the hardware level, function code is cached in the L0/L1 caches, so if a program is plainly processing something small like integers, longs or doubles (pointers), it’s fine, as there are fewer cache misses in that case; but if a string is declared in every function there is, we have a huge problem…
So in the end it depends on the programmer; calls in themselves are not costly on modern hardware.
Pingback: [C#] Stored Procs와 Code에서 SQL을 유지하기위한 장단점은 무엇입니까? - 리뷰나라