The context menu for the test case needs to be appropriate to the test case as a "thing", not as an arbitrary folder on disk. For example, the test case should not have the action to create new files and folders. The action "Test" should probably be renamed "Run Test".
Also, part of this design needs to take into account the following:

- A node for the input (e.g. "Input") as a logical node rather than just a file node. For example, it should probably not allow general file actions like "XSL Transformation...".
- A node for the output (e.g. "Output") as a logical node rather than a file node, same as above.
- Recording and presenting test result outputs (both positive and negative) without needing to flip to the Files view of the project. There are two constraints here:
  1. The test result outputs must be stored in a non-versioned location under the project, not in the test result folder itself. The node under the test case should effectively be a link to these results.
  2. Multiple results must be supported. Each result should be presented with an easily readable timestamp, and possibly an indicator of whether it was a success or failure.

A strawman design:

- MyCompApp
  ...
  - Test Scenarios
    + Test 1
    - Test 2
        Input
        Output
        - Results
            08-01-2006 20:00 - Failed
            08-01-2006 20:01 - Failed
            08-01-2006 20:02 - Success
            ...
  ...
Also there should be no need to "cross reference a number in output window to a file in the filesystem tab" while trying to reach the test output. The design of logical nodes covering test case input, output and result should consider this use case.
Let's talk about the use cases & scenarios a bit now...

UC1: Create a Scenario
- Through the New File wizard available on the Test Scenarios and Project nodes.
- Question: Should the input file be automatically opened after creation?
- UC1.1: The user wants to initialize the newly created scenario with a result from a running service.
- UC1.2: The user wants to create the expected output manually.
- Question: How should UC1.1 and UC1.2 be fulfilled? Should the output.xml be opened automatically after scenario creation? Should there be an option (in the editor) to pull down the expected output from the service?

UC2: Run Test Scenario

UC3: Delete Test Scenario

UC4: View results of a Test Scenario
- Questions: Should the results be automatically visible after running the test? Only when failed? Should the most recent results be accessible from the Test Scenario node? Do users expect something like a diff of the result vs. the expected result?

UC5: Delete one or more results
- The results are stored on the user's system and exposed in the project explorer (for viewing purposes); the user may want to get rid of old results that are no longer interesting.
- The user selects one or more test results and invokes the Delete action from the contextual menu.
Just a couple of comments:

UC1: I think a nice feature here would be to generate a partially populated input document. There is currently code completion for instance documents, but it would be nice to have the ability to generate a valid skeleton input document at this point and then customize it.

UC4: I think using the JUnit model for test results and execution is a good target.
UC1: It seems reasonable to open the input automatically. The output should not be opened automatically because it is blank on initial creation. I suggest creating the empty output file on test case creation, as is done now. This creates a node under the test case that the user can open and customize if he wants. However, if he doesn't customize it, then when a test case fails because the output file is empty, prompt the user, for example: "The expected output file for the test case was empty. Do you want to save the most recent output as the test case's expected output file for comparison during later test runs?" The problem that we need to overcome, and that this solves, is that it is not obvious why the first test run fails.

In my opinion, it would be desirable if the "Run" action in the test case context menu changed when the expected output file is blank, to indicate that the action would pull down an expected output. However, the project must be deployed first to make this possible, and I don't think it's feasible to deploy the project as part of this action (for example, what happens if the deployment fails?). Therefore I don't feel this approach is reasonable at this point, and I prefer the proposal I made above, since it's just an incremental addition to the current behavior.

UC4: The results should be appended to the list of result nodes immediately. They should not be opened automatically, because that would cause a huge number of extraneous files to be opened. A diff of a particular result against the expected result would be desirable. However, this needs to be an XML diff; a textual diff is not that useful. Currently there is no facility for doing proper XML diffs in the IDE (IIRC).

UC5: Agree.

Other issues: Please note that the New Test Case wizard needs some attention. It has a few problems, including that it resizes the wizard dynamically and creates new test cases by appending zero to their names.
It also does not follow NB UI conventions and may have some icon issues. We also need to normalize the term "test case" throughout the UI.
We could probably change the "Run" action in the test case context menu to "Initialize test case" if the output.xml is empty/missing, as Jiri suggested before. When a test case fails, we would like to provide a hyperlink from the JUnit output tab to the actual file, but I don't think the JUnit module currently provides such support. Since the actual output will be made visible in the logical view, this might not be as big an issue as before. IZ 78146 talks about a default test case. Is a default test case really useful? How does one invoke the default test? The "Test Project" action on the CompApp project node invokes all the test cases.
While I agree that we could change the action to "Initialize test case" and go in that direction, it begs the question of why this would need to be done by the user. In other words, why shouldn't it just be initialized when the test case is created? As a user, I don't want to create something and then have to initialize it as a separate first step before using it. I think the answer to why you'd need to do this is that the project must be deployed before the test case can be initialized. Deployment is not a trivial thing, and so the instinct is to defer initialization onto the user. Assuming that making the user do something the computer should do for him is bad, this is why I suggested creating an empty output file with the assumption that the user will fill it in - but if he doesn't before he invokes the test case, we will help him if he tells us that's what he wants.
Sounds good. A couple more questions: Instead of showing "Success" or "Failed" on the actual result node, it is probably better to use some badge on the node icon to indicate the status. Is there any badge in NetBeans that indicates success or failure? I think whether a test case run was successful should be a static thing. This status should not be computed on the fly based on the then-current expected output.xml. However, since the user can change the expected output.xml at any time, a previously successful run might show some difference against the newly updated expected output.xml. Is this too confusing? Should we recompute the status when the expected output.xml gets changed?
It's probably overkill to put "Success" on every node, but "Failed" would be appropriate. We need redundancy in the UI beyond just icons, both for accessibility and for visual ease of navigation. So for example, I can imagine this sort of thing, where "x" and "y" are different icons:

- Results
    (x) 08-01-2006 20:00 - Failed
    (x) 08-01-2006 20:01 - Failed
    (y) 08-01-2006 20:02
    (y) 08-01-2006 20:03
    (x) 08-01-2006 20:04 - Failed

I agree that the success/failure status should be static; I don't think it's necessary to recompute the status. In fact, it might be incorrect to do so, because users will use that information to find past test cases. If results have a timestamp, that should be enough information for someone to find what they want and not get confused, IMO. Also, I didn't mention it before, but if you use a timestamp for the node label, the format needs to be i18n'd so it shows in the local format. You should be able to use MessageFormat to do that.
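As an aside on the i18n point: a minimal sketch of locale-sensitive timestamp labels using the JDK's DateFormat (MessageFormat's {0,date}/{0,time} styles delegate to the same locale-aware formatters). The class and method names here are illustrative, not part of the actual module:

```java
import java.text.DateFormat;
import java.text.MessageFormat;
import java.util.Date;
import java.util.Locale;

public class ResultNodeLabel {
    // Format a result timestamp for the node label using the given locale,
    // appending " - Failed" only for failed runs (success stays icon-only).
    static String label(Date runTime, Locale locale, boolean failed) {
        DateFormat fmt = DateFormat.getDateTimeInstance(
                DateFormat.SHORT, DateFormat.SHORT, locale);
        String stamp = fmt.format(runTime);
        return failed ? stamp + " - Failed" : stamp;
    }

    public static void main(String[] args) {
        Date now = new Date();
        // The same instant renders differently per locale,
        // e.g. "9/19/06 8:02 PM" (US) vs. "19.09.06 20:02" (Germany).
        System.out.println(label(now, Locale.US, true));
        System.out.println(label(now, Locale.GERMANY, false));
        // MessageFormat produces the same locale-aware rendering:
        System.out.println(MessageFormat.format("{0,date,short} {0,time,short}", now));
    }
}
```

Either API works; the key is never hard-coding a pattern like "MM-dd-yyyy HH:mm" for user-visible labels.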
I have an idea about the first run problem. What if we don't treat the first run specially? We can let the first run fail and generate an actual output file, just like any other run. We can add a new action on the test case actual result node to save it as expected output for later runs. This way, the user is free to overwrite the expected output herself at any time.
Output Creation
---------------
I've put your comments related to output message creation together, added my position, and here is the result:
* Keep the "Run Test Case" action as is (no replacement with "Initialize ...").
* Let's cover some of the use cases with contextual menu actions on the test case node and the results node (and perhaps on the input and output nodes) - including pulling down an output message from the server and creating skeleton content.
* Also, on "Run Test Case", if the test fails and the output is blank, let's offer the user "Save Result as Output".

Node Structure #1
-----------------
In regard to node structure: do we need extra nodes for Input, Output and a Results container? Would a structure like the following be sufficient - or better optimized for cases where the user rarely visits the input or output but spends much more time with the results?

[ ] Test Cases
  -- [ ] TestCase1
    -- [ ] Result1
    -- [ ] Result2

The test case node would then have actions as follows:
---
Run Test Case
Open Input
Open Output
Create Skeleton Input (opens for editing)
Create Skeleton Output (opens for editing)
Use Recent Result as Output
Delete Results
---

Node Structure #2
-----------------
Or if that is overoptimized, maybe the following would be enough:

[ ] Test Cases
  -- [ ] TestCase1
    -- [ ] Input
    -- [ ] Output
    -- [ ] Result1
    -- [ ] Result2

Other comments
--------------
* A DnD of a result node onto the test case node could replace the contents of the output with the result (with a confirmation dialog).
* Should we put more effort into UC5? Maybe the user does not want to delete previous results case by case. Maybe he or she wants to start off by removing all the previous results and running all the tests. Should there be an action like "Delete Results" on the "Test Cases" node?
* The "Add Test Case" action on the "Test" node is inconsistent with the title of the wizard, "Create New Test Case".
I think we should follow the "New File" syntax here and rather name the action and the wizard as "New Test Case".
Node structure #2 gets my vote. Yes, we can add "Delete Results" on both "Test Cases" node and individual Test Case node. "A DnD of a result node onto the test case node could replace the contents of output with the result". This is convenient, but I don't know if this is obvious to the user. Is there such behaviour in NetBeans? Anyway, I think we probably still need to provide some node action. Do you prefer "Use Recent Result as Output" on the test case node or "Save as Output" on the actual test case result node? My vote is for the latter one because this is more flexible. The user can set any actual result as output, not necessarily the latest one.
I agree that structure #1 is oversimplified, so let's pursue #2. Although the DnD behaviour is not crystal clear, it is a potentially big productivity enhancement with a low risk of data loss or confusion, so from a usability perspective there's no blocker.

Recently a question came to my mind: what is the importance of the historical results? Are these important to users, or are they rather involved in corner cases? Should the latest result be somehow prominent? Would something like a single "Results" node with a specialized editor with a "history browser" be a better solution? I understand that it might be too late for such ideas in this phase, but maybe this input can be used somehow.

Besides this, I think that both of the contextual menu items (Use Recent / Save as Output) would be useful. To sum this up, the contextual menus should go like this (comments welcome):

On "Test Scenarios":
    Add Test Scenario
    ---
    Paste (what is the semantics?)

On "test scenario":
    Run
    Diff (what is the semantics of this one?)
    ---
    Delete
    Delete Results

On "Input":
    Open
    Create Skeleton Input

On "Output":
    Open
    Create Skeleton Output
    Use Recent Result as Output

On "result":
    Open
    Use as Output
We can probably remove the copy/paste functionality under the Test Scenarios node for now. In the future, I think it would be convenient for a user to be able to copy/paste an existing test scenario and then modify some test properties without going through the test case creation wizard. The Diff action on the test scenario node shows the diff view between the expected output and all the actual results (one at a time, of course, with the help of a drop-down list). If I understand it correctly, your "Open Input/Output" opens the file in read-only mode, and your "Create Skeleton Input/Output" opens the file in editing mode. Can we just use "Edit Input/Output" instead? I see your point about a single "Results" node with a specialized editor. We can probably flesh it out after coke FCS.
There is a bit of a misunderstanding; let me clear it up. What I'm suggesting is:

On "Input":
    Open - opens the file input.xml for editing in the XML editor
    Create Skeleton Input - overwrites the contents of input.xml with a skeleton document and opens it for editing in the XML editor

On "Output":
    Open - opens the file output.xml for editing in the XML editor
    Create Skeleton Output - overwrites the contents of output.xml with a skeleton document and opens it for editing in the XML editor
    Use Recent Result as Output
I don't see why we need the "Create Skeleton Input/Output" actions. The skeleton input will be generated automatically for you when the test case is created. And there is no way to know what a skeleton output would look like without running the test case. Am I missing something here?
And one more question: we use a badge to indicate an incomplete test case with an empty output file. But in some rare cases, the expected output of a test case is supposed to be empty. What do we do then?
Hm, I thought that both the input and output skeleton messages could be created according to the message type (and thus the complex type / element) defined for the tested operation, no? How do we find out that the output is supposed to be empty? Is that according to the operation signature? I think that in that case a plain icon without any badge will be appropriate.
You are right. We can also generate the output skeleton based on the operation definition. If we generate the skeleton output automatically during test case creation, there seems to be no need for the empty-output badge any more. If the output file is empty, then it is supposed to be empty.

If we generate both the input and output skeletons automatically during test case creation, do we still need the "Create Skeleton Input/Output" actions? (The only use case I can think of is that the user modifies the WSDL after the test case is created. I am not sure that is a use case we want/need to support. Or maybe you are thinking of something like the user messing the files up very badly and wanting to recover?)

Also, if we create the skeleton output (whether automatically or through the action), the XML diff tool has to be really good. Here is an example (note the namespace and Header element differences). This is the generated skeleton output:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:syn="http://xml.netbeans.org/schema/SynchronousSample">
    <soapenv:Body>
        <syn:typeA>
            <syn:paramA>?string?</syn:paramA>
        </syn:typeA>
    </soapenv:Body>
</soapenv:Envelope>

and this is the actual output:

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:xsd="http://www.w3.org/1999/XMLSchema"
                   xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance">
    <SOAP-ENV:Header/>
    <SOAP-ENV:Body>
        <typeA xmlns="http://xml.netbeans.org/schema/SynchronousSample">
            <paramA>?string?</paramA>
        </typeA>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
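To illustrate why a textual diff falls short here: the skeleton and the actual output bind the same namespace to different prefixes (syn: vs. a default xmlns), so every line differs textually even where the structure matches. A useful comparison has to be namespace-aware. A minimal sketch using the JDK's DOM parser - this is not the IDE's diff facility, just an illustration of prefix-insensitive structural comparison (it ignores attributes and text content for brevity):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XmlCompare {
    // Recursively compare two elements by namespace URI + local name,
    // ignoring prefixes and non-element children.
    static boolean sameElement(Element a, Element b) {
        if (!eq(a.getNamespaceURI(), b.getNamespaceURI())
                || !a.getLocalName().equals(b.getLocalName())) return false;
        List<Element> ca = children(a), cb = children(b);
        if (ca.size() != cb.size()) return false;
        for (int i = 0; i < ca.size(); i++)
            if (!sameElement(ca.get(i), cb.get(i))) return false;
        return true;
    }

    static boolean eq(String x, String y) { return x == null ? y == null : x.equals(y); }

    static List<Element> children(Element e) {
        List<Element> out = new ArrayList<Element>();
        NodeList nl = e.getChildNodes();
        for (int i = 0; i < nl.getLength(); i++)
            if (nl.item(i) instanceof Element) out.add((Element) nl.item(i));
        return out;
    }

    static Element parse(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);  // essential: otherwise prefixes are part of the name
        return f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDocumentElement();
    }

    public static void main(String[] args) throws Exception {
        String skeletonStyle =
            "<syn:typeA xmlns:syn='http://xml.netbeans.org/schema/SynchronousSample'>"
          + "<syn:paramA>?string?</syn:paramA></syn:typeA>";
        String actualStyle =
            "<typeA xmlns='http://xml.netbeans.org/schema/SynchronousSample'>"
          + "<paramA>?string?</paramA></typeA>";
        // Textually different on every line, structurally identical:
        System.out.println(sameElement(parse(skeletonStyle), parse(actualStyle)));
    }
}
```

A real diff would of course also compare text content and attributes, and report the extra SOAP-ENV:Header element as a structural difference rather than a line change.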
Per Bob's request, I am writing a summary of what has been agreed on in this discussion:

o The Input and Output nodes are logical nodes rather than file nodes.
o Input.xml should be opened automatically when a new test case is created.
o The Output is initially empty. After the first run, the user will be prompted to overwrite the empty output with the actual result.
o Multiple results should be visible in the logical view. Each result is represented with a timestamp and an indicator of whether it was successful. The results should be sorted (based on the timestamp).
o The successful/failed status of a test case result is static. It shouldn't change because of changes in Output.xml.
o A DnD of a result node onto the output node should replace the contents of the output with the result (with a confirmation dialog).

The contextual menus go like this:

On "Test":
    New Test Case
    Delete All Results

On "Test Case":
    Run
    Diff
    ---
    Delete
    Delete Results
    ---
    Properties

On "Input":
    Open

On "Output":
    Open
    Use Recent Result as Output

On individual result:
    Diff
    Use as Output
    ---
    Open
    Delete
Created attachment 33342: New UI Design for Test Driver (icons are temporary)
The user can replace the output file with the actual result in several different ways:
* Right-click a result node and select "Use as Output" from its context menu
* Right-click the output node and select "Use Recent Result as Output" from its context menu
* (Drag a result node and drop it onto the output node)

The third one will be a future enhancement and will not be available in the upcoming FCS. IMHO it is pure icing on the cake.
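For what it's worth, both menu actions boil down to the same file-level operation: overwriting output.xml with a chosen result file. A minimal sketch under stated assumptions - the paths and class name are illustrative, and a real NetBeans module would go through FileObject APIs and show the confirmation dialog first:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class UseAsOutput {
    // Replace the test case's expected output with a chosen actual result.
    // In the IDE this would be preceded by a Yes/No confirmation dialog.
    static void useAsOutput(Path resultFile, Path outputFile) throws IOException {
        Files.copy(resultFile, outputFile, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical on-disk layout for a test case and its stored results.
        Path dir = Files.createTempDirectory("testcase");
        Path result = Files.write(dir.resolve("result-20060919.xml"), "<out/>".getBytes());
        Path output = Files.write(dir.resolve("output.xml"), new byte[0]);  // initially empty
        useAsOutput(result, output);
        System.out.println(new String(Files.readAllBytes(output)));  // prints <out/>
    }
}
```

"Use Recent Result as Output" just picks the newest result file as the source; "Use as Output" lets the user pick any of them, which is why the per-result action is the more flexible of the two.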
Per CAT request: icons will be delivered by UI freeze (by 15 September). I still see possible usability issues in the workflow related to output.xml; perhaps those could be resolved as bug fixes. So I'm assigning to Jun. The issues I see are the following:
* Will the user understand that he or she needs to run the test case first to get the output?
* What about the use case where the user wants to fill in the output manually (using a skeleton) instead of using the existing output? Is it a frequent use case?
* Jun, I don't quite understand the skeleton that you provided. I expected that in an actual output the parameter ?string? would be replaced with an actual value. And then, what if the parameter is of a complex type or element? Would we create a skeleton structure for that type? I thought yes (and this would add importance to the "Create Skeleton Output" use case).
In the example I provided, I didn't change "?string?" in the input file; that's why the actual output contains "?string?". As I said, until we have a very good XML diff story, it's probably hard for a user to compare the skeleton output with the actual output. I agree that a new user might not understand that he or she needs to run the test case first to get the output. Since we already provide the badge and tooltip to notify the user that the output file is empty, do you think it would help if we added something like "run the test case to generate the output" to the tooltip?
Jiri, I'd like to close this one ASAP. Any final comments?
Sure, please, yes, you can close this. In my previous post by "perhaps that could be resolved as a bug fix", I really meant that we can resolve my concerns after this is closed. Assigning to Jun.
Great!
Reopening for a while: there is a "first run" popup dialog asking the user whether he or she wants to use the last result as the expected output. The popup should have Yes / No buttons instead of OK / Cancel.
81649 is a closed Jackpot issue -- did someone mean to enter a different blocker?
Tom, I thought this issue was not related to Jackpot in any way - at least my final comment wasn't - so please disregard this. It seems to me that there is a bit of a mess in the IZ dependencies; perhaps that's why you were notified.
First run confirmation dialog is fixed now.
verified 2006-09-19