Somehow we lost the ability to set a plain text file's
character encoding. There used to be a property for this;
now there is an Encoding property for Java source files, but
there seems to be no way of setting the encoding of plain text files.
Is the "Text" pane missing from the plain-text file properties?
The encoding property exists for Java files only. Other files must be
handled by setting the global JVM-level encoding property -Dfile.encoding.
I'm not aware whether any effort is being made in this area
(an encoding cookie or similar), but I am transferring to core for
evaluation. The editor module itself is not the right place to do the
possible conversion (choosing the right byte-to-char converter), as it
should only get a Reader with the proper encoding.
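In plain Java, that separation looks like this: the caller chooses the byte-to-char converter and hands the editor a ready-made Reader. This is only a minimal sketch; the class and method names are illustrative, not NetBeans API:

```java
import java.io.*;
import java.nio.charset.Charset;

public class ReaderWithEncoding {
    // Build a Reader that decodes the file with an explicit charset,
    // instead of relying on the JVM-wide file.encoding default.
    static Reader openReader(File file, String charsetName) throws IOException {
        return new BufferedReader(
            new InputStreamReader(new FileInputStream(file),
                                  Charset.forName(charsetName)));
    }
}
```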
Reassigning to openide/editor, please try to evaluate, thx. I don't
know enough details, but could it be that the text module will provide
functionality similar to the java module's... or would it be wiser if
openide/editor supported encoding selection for any document type?
Jan Hlavaty, in which version did the plain text encoding property exist?
I think it would be very desirable to do it for all the MIME types in
a common way.
The editor kit part could then obtain a java.io.Reader with the
proper encoding already applied.
I think there never was an Encoding property. Closing as duplicate.
*** This issue has been marked as a duplicate of 19928 ***
Reopening. Please see the discussion in issue 19928. It does not look
like we will be able to implement this properly in the platform soon, so a
solution similar to the one in the java module should be done in the meantime
for plain text files.
At the moment I'm not sure whether this is an editor module issue or a text
module issue. Please reassign if necessary. Thx.
Needs to be implemented in the text module. Compare the java module's JavaNode
property, Util.setFileEncoding, and of course JavaEditor.loadFromStreamToKit and
saveFromKitToStream. It doesn't look particularly complicated to me; it is a
bit more work in the java module just because the encoding needs to be passed to
the compiler as well.
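For plain text, the same load/save pattern could be imitated with the standard Swing DefaultEditorKit. This is a sketch under the assumption that the per-file encoding is already known; the class name is made up, and the real NetBeans implementation would live in the editor support classes:

```java
import javax.swing.text.*;
import java.io.*;
import java.nio.charset.Charset;

public class EncodingAwareLoader {
    // Analogue of JavaEditor.loadFromStreamToKit: wrap the raw stream in a
    // Reader using the per-file encoding before the kit parses it.
    static Document load(InputStream in, String charsetName, EditorKit kit)
            throws IOException, BadLocationException {
        Document doc = kit.createDefaultDocument();
        try (Reader r = new InputStreamReader(in, Charset.forName(charsetName))) {
            kit.read(r, doc, 0);
        }
        return doc;
    }

    // Analogue of saveFromKitToStream: encode with the same charset on save.
    static void save(Document doc, OutputStream out, String charsetName,
                     EditorKit kit) throws IOException, BadLocationException {
        try (Writer w = new OutputStreamWriter(out, Charset.forName(charsetName))) {
            kit.write(w, doc, 0, doc.getLength());
        }
    }
}
```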
Any progress on this? I need to edit files which are in UTF-8.
It can't be too complicated to do this.
I'm pretty sure there was a generic character encoding property on
file nodes in the Explorer tree view in previous versions.
Maybe some magic autodetection would be nice, like looking
for an "encoding=UTF-8" (or encoding=<anything> or charset=<anything>)
string in the first few lines of the text file. I could then place an
encoding declaration into comments of some sort.
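Such autodetection could be sketched as follows, scanning the first few lines for an encoding= or charset= declaration. The class name and the exact accepted syntax are assumptions for illustration, not an existing NetBeans API:

```java
import java.io.*;
import java.util.regex.*;

public class EncodingSniffer {
    // Matches declarations like encoding=UTF-8, charset="ISO-8859-15".
    private static final Pattern DECL =
        Pattern.compile("(?:encoding|charset)\\s*=\\s*\"?([\\w.:-]+)\"?",
                        Pattern.CASE_INSENSITIVE);

    // Scan at most maxLines lines and return the declared charset name,
    // or null so the caller can fall back to a configured default.
    static String sniff(Reader source, int maxLines) throws IOException {
        BufferedReader r = new BufferedReader(source);
        String line;
        for (int i = 0; i < maxLines && (line = r.readLine()) != null; i++) {
            Matcher m = DECL.matcher(line);
            if (m.find()) return m.group(1);
        }
        return null;
    }
}
```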
Issue 42638, if approved, could make the implementation of this one simpler.
It's currently not possible in NB to save a text file in Unicode or in
any format other than the default encoding. It's essential to have
a file encoding option for text-based files (.txt, .js, .html, etc.).
In NB 5.0 beta 2, we still have no way to specify the encoding of plain text files
(documentation, readme files, licenses, configuration files...). The platform default
encoding is not appropriate for projects that are worked on by many
developers with different default platform encodings (where a fixed encoding
like UTF-8 would be much better). This is a major shortcoming; can you please
look into this?
Comment from a (very bored) NetBeans user:
This is essential, and my feeling is that the problem is more severe in
NetBeans 5.0 RC2. I would recommend having the possibility to set the MIME type
AND the encoding for all project files (perhaps java source code is another case).
I'm doing professional J2EE coding with Tapestry as the web front-end library. With
Tapestry, you use XML description files for (template) HTML pages and
components. These files are XML but are named *.page and *.jwc
respectively. It is common to have UTF-8 or ISO-8859-15 encoding here because
these files are involved in HTTP form input validation, and because of this there
are often regexes in the files that limit the input, for example to [a-z]|[A-Z]|
[äöüÄÖÜß€] or the like.
In NetBeans 4.1 it was bad: you couldn't edit these files, but if you opened
them, they weren't harmed. In NetBeans 5.0 RC2 it is worse: you can't edit
these files, and when you open them, they are DESTROYED by converting all
'unknown' characters to '?'. This is done even when my default encoding
contains 'äöüÄÖÜß€'!
I fiddled around a bit with a start script like this:
/usr/java/netbeans-5.0-rc2/platform6/lib/nbexec --userdir \
but with absolutely no success. The default encoding seems to be ignored.
At the moment there is no support for having XML files without an *.xml file
extension. This is an extract of a *.page file as an example:
<?xml version="1.0" encoding="UTF-8"?>
PUBLIC "-//Apache Software Foundation//Tapestry Specification 4.0//EN"
<inject property="mandantDao" object="spring:mandantDao"/>
<set name="required" value="false"/>
<set name="pattern">"([0-9]|[A-Z]|[äöüÄÖÜß]|[a-z]|\\ |\\-|\\.|\\(|\\))
<set name="errorMessage" value="ognl:messagesUtil.nameValidatorMsg"/>
Strange - overriding the default encoding should work (though it is not supported
officially). Could you try -J-Dfile.encoding=UTF-8, please? Thanks.
The problem with this is that you assume a single, global encoding, which is often
not the case! We need to be able to specify the encoding of any text file with
better granularity, ideally per file.
I know the problem well; what I was wondering about was that overriding
the JVM's file encoding property did not work as expected.
Good example of the user experience.
I think that the text file encoding (including .java, .txt, .js, etc., i.e. files
that cannot carry encoding info in themselves) should be definable per:
- entire IDE, in the Options dialog
- project, in the project properties dialog
- file, in the file properties dialog
Then, if an encoding other than the system encoding is set, the compiler options
should be changed accordingly, e.g. adding -encoding UTF-8 according to the text
file's encoding.
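Forwarding the chosen encoding to the compiler would amount to adding javac's -encoding flag. Here is a sketch using the standard javax.tools API; the file contents and paths are illustrative:

```java
import javax.tools.*;
import java.nio.file.*;

public class CompileWithEncoding {
    public static void main(String[] args) throws Exception {
        // A source file saved as UTF-8, possibly containing non-ASCII literals.
        Path src = Files.createTempFile("Example", ".java");
        Files.write(src, "class Example { String s = \"äöü€\"; }".getBytes("UTF-8"));

        // Tell javac explicitly how the source bytes are encoded,
        // rather than letting it assume the platform default.
        // getSystemJavaCompiler() requires a JDK, not just a JRE.
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        int status = javac.run(null, null, null,
                               "-encoding", "UTF-8",
                               "-d", src.getParent().toString(),
                               src.toString());
        System.out.println(status == 0 ? "compiled" : "failed");
    }
}
```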
Obsolete milestone, please reevaluate
6.0 has file encoding support (of a different sort than prior releases).