This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
[ BUILD # : 200606210200 ] [ JDK VERSION : 1.5.0_06 ] I have three Sun App Server 9-like servers installed and registered:
- GlassFish b48, with the default admin/adminadmin user and password (stopped)
- GlassFish UR1, with the default admin/adminadmin user and password (stopped)
- Sun App Server 9, with a custom user/password (running as a service)
Whenever I invoke "Debug Main Project" (on a mobile application with no dependencies set on any web app), NB 5.5 asks for the password for the two stopped servers.
For some reason, debugging mobile apps triggers J2EE server module calls. When the name/password dialog was up, I did a thread dump. Here is what I see:

at org.netbeans.modules.j2ee.sun.share.management.ServerMEJB.invoke(ServerMEJB.java:91)
at org.netbeans.modules.j2ee.sun.ide.dm.SunDeploymentManager.getTargets(SunDeploymentManager.java:580)
at org.netbeans.modules.j2ee.sun.ide.dm.SunDeploymentManager.isRunning(SunDeploymentManager.java:904)
at org.netbeans.modules.j2ee.sun.ide.dm.SunDeploymentManager.isRunning(SunDeploymentManager.java:876)
- locked <0x05885440> (a org.netbeans.modules.j2ee.sun.ide.dm.SunDeploymentManager)
at org.netbeans.modules.j2ee.sun.ide.j2ee.StartSunServer.isRunning(StartSunServer.java:586)
at org.netbeans.modules.j2ee.deployment.impl.ServerInstance$3.run(ServerInstance.java:576)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:499)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:932)

So why is org.netbeans.modules.j2ee.deployment.impl.ServerInstance$3.run called when you debug a J2ME app? This call triggers admin calls on the backend, and if for some reason the password is not known, the dialog is prompted. Moving to the correct category for more analysis.
When a new debugger session is created, the j2eeserver checks whether the session belongs to one of the registered servers; if it does, it starts listening to that session's changes and updates the server node accordingly. This is how we can determine that the server got suspended, and then hide the server node's subnodes and disable the server state management actions. Reassigning back to the sunappserv9 module to decide what can be done about this. BTW, do we really need to know the username and password just to check whether the server is running?
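The check described above can be sketched roughly like this; all class and method names here are hypothetical illustrations of the idea, not the actual j2eeserver API:

```java
import java.util.List;

// Hypothetical sketch: when a debugger session starts, see whether its
// attach address matches a registered server instance; only then would
// the IDE attach a listener that mirrors session state (e.g. "suspended")
// onto the server node.
public class DebugSessionMatcher {

    // Minimal stand-in for a registered server instance.
    public record ServerInstance(String host, int debugPort) {}

    // Returns the registered instance this debug session belongs to,
    // or null if the session is unrelated (e.g. a J2ME app).
    public static ServerInstance findOwner(List<ServerInstance> registered,
                                           String sessionHost, int sessionPort) {
        for (ServerInstance s : registered) {
            if (s.host().equals(sessionHost) && s.debugPort() == sessionPort) {
                return s;
            }
        }
        return null; // unrelated session: leave the server nodes alone
    }
}
```

Under this model, a J2ME debug session should come back as "no owner" and never touch the server instances at all, which is the behavior the reporter expected.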
Yes, the logic is that isRunning returns false when the username/password is not valid. The reason is that if we returned true, the entire j2eeserver logic would be broken: we would have state management for a running server but no ability to talk to it, because the password is wrong. So I still believe it is a j2eeserver lifecycle bug. In any case, I do not see a fix for 5.5 without changing the j2eeserver module.
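The policy described here can be sketched as follows; the names are illustrative stand-ins, not the real SunDeploymentManager API:

```java
import java.io.IOException;

// Hypothetical sketch of the isRunning() policy: an authentication
// failure is reported as "not running", because reporting "running"
// would enable state-management actions on a server the IDE cannot
// actually talk to.
public class RunningCheck {

    public static class AuthFailedException extends Exception {}

    // Stand-in for the authenticated admin call to the server.
    public interface AdminChannel {
        void ping() throws AuthFailedException, IOException;
    }

    public static boolean isRunning(AdminChannel admin) {
        try {
            admin.ping();
            return true;   // authenticated call succeeded
        } catch (AuthFailedException e) {
            return false;  // bad user/password => treated as not running
        } catch (IOException e) {
            return false;  // nothing listening on the admin port
        }
    }
}
```

This is exactly the trade-off discussed in the following comments: collapsing "bad credentials" into "not running" keeps the lifecycle logic consistent, but it hides a genuinely running server from users whose stored password is wrong.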
Is the service on the same ports as the other two domains? Can all of these servers be started at the same time? It seems like moving the service to a different set of ports might address this, since the dialog should not appear if the domain's admin port is not accepting connections.
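The cheap pre-check implied above could look like this; it is an illustration of the idea, not the plugin's actual code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: before making any authenticated admin call (which may pop up
// a password dialog), check whether the domain's admin port accepts a
// TCP connection at all. If nothing is listening, the domain cannot be
// running and no credentials are needed.
public class AdminPortProbe {

    public static boolean isPortAccepting(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;   // something is listening; an authenticated check may follow
        } catch (IOException e) {
            return false;  // refused or timed out: no need for the password dialog
        }
    }
}
```

Note this only tells you that *something* is listening on the port; as the next comment explains, when several registered instances share one host/port, an authenticated call to the DAS is still needed to tell them apart.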
Since multiple servers can be registered on the same host/port combination (localhost:4848), the server needs to be contacted to verify that the correct instance is being flagged as running. To figure this out, we have to ask the DAS "where are you installed?", and that question has to be authenticated to get an answer. If all the instances on a developer's machine share the same username/password OR live on unique ports, this situation will not happen. It appears that the user has multiple servers configured on the same port with different credentials. I think we would need to stop the layer that opens the authentication dialog from doing so, and assume that a failed authentication means "I am not running", but that puts security-conscious users in a pickle, because they would never be told that their server was running. I think the work-around is to: 1. give all the instances that share a host/port combination a single username/password, OR 2. put all instances on a single machine on unique host/port combinations. Since there is a work-around, I am going to lower the priority on this one and set the TM to future.
I don't know if this applies to v3. I will raise this to P3 if we come across a case where it does occur, since we are unlikely to address this in the v2 plugin.
examine after 7.0