If you are planning to upgrade your WebSphere servers to a later fixpack level, be sure to start with the Deployment Manager. If you do not, your Deployment Manager will not be able to contact a node agent that is already at the later level, and you will see errors such as:
“Server cannot be started because the node agent for server Node1 on node x is not active”
Attempting to run syncNode against the Deployment Manager will result in:
ADMU0127E: The version of the Deployment Manager is earlier than that of this node. Deployment Manager version earlier than that of a node is an unsupported configuration. Upgrade the Deployment Manager to the same or later version as that of the node.
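A quick way to confirm the levels before syncing is to run versionInfo on each host and then resynchronize the node. The paths, hostname, and port below are illustrative (8879 is the default DMGR SOAP connector port); adjust them for your installation:

```shell
# On the Deployment Manager host: report the installed WAS level
/opt/IBM/WebSphere/AppServer/bin/versionInfo.sh

# On the node host: report that node's level, then resynchronize
# its configuration with the DMGR (hostname/port are examples)
/opt/IBM/WebSphere/AppServer/bin/versionInfo.sh
/opt/IBM/WebSphere/AppServer/bin/syncNode.sh dmgrhost 8879
```

If the node's reported level is higher than the DMGR's, syncNode will fail with ADMU0127E as shown above.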
But after updating the DMGR, could we update one node, restart it, and then stop and update the second node? (In this case each node has a dedicated runtime.)
Oh, for sure. Once you have updated the Deployment Manager, you can perform your upgrades by taking one node down, upgrading it, starting it back up, and then taking the other node down.
There’s no issue with having different levels of WAS inside your cluster; it’s just that the DMGR always has to be at the highest fixpack level in the cell.
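The rolling procedure described above can be sketched as the following command sequence. Paths, profile names, and server names (AppSrv01, server1) are illustrative; the actual fixpack installation step depends on your installer and is not shown:

```shell
# Rolling node upgrade, after the DMGR is already at the target level.
# Assumed paths/names -- adjust for your environment.
PROFILE_BIN=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin

# 1. Stop the application server(s) and the node agent on node1;
#    failover sends traffic to the cluster members on node2
$PROFILE_BIN/stopServer.sh server1
$PROFILE_BIN/stopNode.sh

# 2. Apply the fixpack to node1's installation
#    (via IBM Installation Manager / Update Installer, not shown)

# 3. Bring node1 back; it now runs the new level under the DMGR
$PROFILE_BIN/startNode.sh
$PROFILE_BIN/startServer.sh server1

# 4. Repeat steps 1-3 for node2
```

At no point is the whole cluster down, which is what makes the interruption-free upgrade possible.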
Ok, thanks for your answer. I was wondering about this because all the IBM software consultants we met told us that there is no way to avoid a production interruption.
Well, the whole point of having a cluster is failover and high availability (assuming there is an HTTP server in front somewhere). If you have two servers in a cluster, you can take server1 down and failover will redirect traffic to server2.
Of course, but the node agents communicate with each other; our consultant told us that they must run the same software version, the same fixpack, and the same iFix to avoid problems…
After updating one node, we have to start it before stopping the other; during that time (it could be just a few seconds) our cluster members are not running the same version.