I have only done one completely virtualized implementation of AX 2009 (including SQL Server 2008 EE x64). That solution is not yet in production, but in the meantime I have looked at several other virtualized implementations done by other partners and often operated by yet another vendor (hosted).
One of them (AX 4.0) consists of two AOS servers in a normal AX load-balancing cluster. The interesting part of this setup is that one server is a dedicated, physical server with DAS, while the other is virtualized (ESX 3.5). When talking to the users, they complain about overall performance when their sessions hit the AOS instance on the virtualized server. I suspect that the cleverest users always check the caption in the main AX window to see which AOS server they hit (and maybe they also immediately start an additional session to land on the physical one, then close the first session against the virtual one). Everything looks fine from the server console (PerfMon, Task Manager, Event log etc.), but I also have the (subjective) feeling that loading the AX client takes somewhat longer when hitting the virtualized one.
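To get past the subjective feeling, one crude but objective check is to measure how long a raw TCP connection to each AOS instance takes from a client workstation and compare the two. Below is a minimal Python sketch of that idea; the host names are placeholders and 2712 is only the default AOS port, so both would have to be adjusted to the actual environment.

import socket
import statistics
import time

# Hypothetical AOS endpoints -- replace with the real host names.
# 2712 is the default AOS TCP port; the actual instances may use another port.
AOS_INSTANCES = {
    "aos-physical": ("ax-aos01.example.local", 2712),
    "aos-virtual": ("ax-aos02.example.local", 2712),
}

def connect_latency_ms(host, port, samples=20):
    """Measure raw TCP connect time to an AOS endpoint, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
        time.sleep(0.5)  # don't hammer the AOS
    return timings

if __name__ == "__main__":
    for name, (host, port) in AOS_INSTANCES.items():
        t = connect_latency_ms(host, port)
        print(f"{name}: median {statistics.median(t):.1f} ms, "
              f"max {max(t):.1f} ms over {len(t)} connects")

This of course only captures network and connection overhead, not the full client start-up, but a consistent gap between the two instances would at least confirm that the difference is real and not just user perception.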
Again I have to talk about complexity. Ideally the AX AOS should be the perfect candidate for virtualization, since it's a CPU, memory and network intensive process rather than a disk-bound one. The AOS server will never put a high physical I/O load on the storage, and its disk I/O is in general low (of course some batches could impact this in AX 2009). And AX 2009 is now fully supported on several virtualization platforms. So why do I bring in complexity again? As with Storage Area Networks, a virtualized environment is simple to utilize (when it's working as expected), but the technology and the layers of infrastructure behind it add a lot of potential error sources when things are not running as expected (most virtualized environments also sit on a SAN). I guess the odds of getting down to all the needed details are a lot better when the solution is implemented at the customer's own site, but customers nowadays tend to outsource this since "it's not part of their core business". In that scenario the complexity is very visible, since the AX partner (or the consultant doing the performance audit) doesn't even get all the details, or access to the parts of the system necessary for forming the big picture. Seen from the hosting partner's side, it's all about utilizing the underlying platform as much as possible to maximize its potential. This often means mixing customers on the same platform (logically isolated at every level), but ultimately sharing the exact same resources at some level. That in turn typically leads to traditional bottlenecks, but they are well hidden from both the customer and the man in the middle.
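Coming back to the claim above that AOS load is mostly CPU, memory and network with little physical disk I/O: one way to sanity-check it on a concrete box is to sample roughly the same counters PerfMon would show you over a period of normal use. This is a rough, generic sketch using the psutil library (nothing AX-specific about it), meant to be run on the AOS server itself:

import time
import psutil  # pip install psutil

def sample_aos_host(interval_s=5, samples=12):
    """Print CPU, memory, disk and network activity at a fixed interval.

    Run this on the AOS box to get a rough feel for whether the load
    really is CPU/memory/network-bound with little physical disk I/O.
    """
    prev_disk = psutil.disk_io_counters()
    prev_net = psutil.net_io_counters()
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        disk_mb = (disk.read_bytes + disk.write_bytes
                   - prev_disk.read_bytes - prev_disk.write_bytes) / 1e6
        net_mb = (net.bytes_sent + net.bytes_recv
                  - prev_net.bytes_sent - prev_net.bytes_recv) / 1e6
        print(f"cpu {cpu:5.1f}%  mem {mem:5.1f}%  "
              f"disk {disk_mb / interval_s:6.2f} MB/s  "
              f"net {net_mb / interval_s:6.2f} MB/s")
        prev_disk, prev_net = disk, net

if __name__ == "__main__":
    sample_aos_host()

On a hosted, shared platform the interesting part is exactly that numbers like these can look perfectly fine inside the guest while the contention happens a layer below, which is the point of the complexity argument above.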
So what's the lesson here? Always differentiate between hosted solutions and locally implemented ones! The main principles are the same, but it's crucial to bring the whole army of partners into the planning when the solution is operated by a hosting partner. Without this, you are basically left on your own, and you'll probably never see the big picture, with all its details, needed to judge where the real issues causing the problems are hiding. I guess this is true for every application or solution.