This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
From what I've learned, the Output Window and some other parts of nb simply pass through information they receive from outside processes. Sometimes this information may be in other encodings. We have seen other issues, such as 18866 and 20331 for the Output Window, and some other issues for the exception window, where the information is not shown correctly, so the user cannot understand what the messages are saying. This request is for some amount of encoding detection to be done for the Output Window and other windows that may show messages from outside processes; we think this would be very helpful for Japanese users. ken.frank@sun.com 03/27/2002
Target milestone was changed from '3.4' to TBD.
While I agree that it might be useful, I don't see any way to detect the encoding in which the input data are coming into the NB platform. So I am marking this issue as WONTFIX. Please reopen only if someone is able to suggest how this could possibly be implemented.
Several high-level suggestions obtained from some i18n resources:

1. Is there a most common group of outside processes whose data is received by nb and shown in the Output Window and other places (such as app servers and databases)? If so, find out what the data transfer encoding of those is and then do the encoding conversions using the Java API.

2. Often, if the user is running in the ja locale, it is probable that messages from an outside process are in some Japanese encoding, just not the one nb is running in, or in UTF-8. In this case, there is a Japanese auto-detect converter in the JDK which may be able to distinguish between the SJIS, EUC, and ISO-2022-JP encodings; that, combined with the possibility that the data is encoded in UTF-8, might solve many issues at least for Japanese encodings, and at the moment Japanese is the only localization done for FFJ.

3. If XML is used for data transfer, the XML standard specifies how to detect the encoding.

ken.frank@sun.com 07/30/2002
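As an illustration of suggestion 3 (not part of the original report), the encoding declared in an XML document's prolog can be read with the standard StAX API; this is a minimal sketch using a hypothetical `<msg/>` document:

```java
import java.io.ByteArrayInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class XmlEncodingProbe {
    public static void main(String[] args) throws Exception {
        // A document whose XML declaration names Shift_JIS as its encoding.
        byte[] doc = "<?xml version=\"1.0\" encoding=\"Shift_JIS\"?><msg/>"
                .getBytes("US-ASCII");
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new ByteArrayInputStream(doc));
        // getCharacterEncodingScheme() reports the declared encoding.
        System.out.println(reader.getCharacterEncodingScheme());
    }
}
```

A consumer could then re-decode the raw bytes with the reported charset before displaying the message.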
More information on using the Japanese encoding detection provided by java.io.InputStreamReader: it can detect whether the input is in the EUC, SJIS, or ISO-2022-JP encodings. Request "JISAutoDetect" as the character encoding name for this call; it is not yet available in the java.nio APIs. See http://java.sun.com/j2se/1.4/docs/guide/intl/encoding.doc.html and the various Java APIs. If the data is already in UTF-8, is there a way to detect that, or might UTF-8 be the fallback when the input is not in one of the Japanese encodings? ken.frank@sun.com 07/30/2002
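A minimal sketch of the detection call described above (not part of the original comment), assuming a JDK that ships the optional "JISAutoDetect" charset, so availability is checked first:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

public class JisDetectDemo {
    public static void main(String[] args) throws Exception {
        // "\u65e5\u672c\u8a9e" is Japanese text; encode it as Shift_JIS bytes.
        String original = "\u65e5\u672c\u8a9e";
        byte[] sjisBytes = original.getBytes("Shift_JIS");
        if (!Charset.isSupported("JISAutoDetect")) {
            System.out.println("JISAutoDetect not available in this JDK");
            return;
        }
        // The reader sniffs the byte stream and decodes it as
        // SJIS, EUC-JP, or ISO-2022-JP, whichever it detects.
        InputStreamReader reader = new InputStreamReader(
                new ByteArrayInputStream(sjisBytes), "JISAutoDetect");
        char[] buf = new char[16];
        int n = reader.read(buf);
        System.out.println(original.equals(new String(buf, 0, n))
                ? "round-trip ok" : "detection failed");
    }
}
```

Printing a round-trip verdict rather than the raw characters avoids any console-encoding ambiguity in the demo itself.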
As David requested below, I'm reopening since I have provided information related to doing encoding detection. ken.frank@sun.com
*** Issue 23040 has been marked as a duplicate of this issue. ***
Changing to defect after consultation with nb QA, and comments from nb strategy that some i18n RFEs could actually be viewed as defects. Let me know if more details are needed. ken.frank@sun.com
Ken - this is an enhancement request IMHO. I don't agree with marking this as a defect - some functionality is missing, not wrong. If you think this really should be done, you will have to find resources for the feature to be implemented. Cc-ing Jan Chalupa from QA to make him aware of the problem. As for the details provided - I haven't evaluated them yet; I would do that only after it is decided that this will be implemented for the next release.
Consistent use of the I18N keyword.
Changed owner David S. -> David K.
changing owner dkonecny -> pnejedly
output is a better component IMO
The output module is always passed Strings by its API clients; internally it stores the information in UTF-16. It is generally the responsibility of the code that reads the file to figure out the encoding correctly. Since 6.0 we have FileEncodingQuery, which could help with this. Marking as WONTFIX.
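For reference, an illustrative sketch (not part of the original comment, and not compilable outside a NetBeans module, since it depends on the NetBeans Platform APIs) of how a client of FileEncodingQuery might obtain the right decoder; `readWithProjectEncoding` is a hypothetical helper name:

```java
// Sketch only - requires the NetBeans Platform modules providing
// org.netbeans.api.queries and org.openide.filesystems.
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.Charset;
import org.netbeans.api.queries.FileEncodingQuery;
import org.openide.filesystems.FileObject;

class EncodingAwareReader {
    void readWithProjectEncoding(FileObject file) throws Exception {
        // Asks the registered FileEncodingQueryImplementations for the
        // file's encoding, falling back to a default if none knows it.
        Charset cs = FileEncodingQuery.getEncoding(file);
        Reader r = new InputStreamReader(file.getInputStream(), cs);
        // ... read decoded characters and pass them on as Strings ...
    }
}
```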