next ISO C++ standard


subramanian100in

The current ISO C++ standard is ISO C++ 1998. Am I correct?

When is the next standard expected? What is the URL for finding out?

Will it contain libraries for network programming as part of the
standard library? (Just as data structures and algorithms were
included as part of the ISO C++ 1998 standard library.)

Kindly clarify.

Thanks
V.Subramanian
 

Victor Bazarov

The current ISO C++ standard is ISO C++ 1998. Am I correct?

No, the latest standard is 2003.
When is the next standard expected? What is the URL for finding out?

Committee home: http://www.open-std.org/jtc1/sc22/wg21/
Will it contain libraries for network programming as part of the
standard library? (Just as data structures and algorithms were
included as part of the ISO C++ 1998 standard library.)

No.

Kindly clarify.

Kindly find out for yourself.

V
 

Jerry Coffin

The current ISO C++ standard is ISO C++ 1998. Am I correct?

No -- the current standard is dated 2003. Most of the differences
between the 1998 standard and the 2003 standard are fairly minor,
though -- most simply change the wording to actually require what was
intended to start with. A few new requirements were added (e.g.
std::vector must use contiguous storage), but these were always true
in practice anyway.
When is the next standard expected? What is the URL for finding out?

The committee is trying for 2009. The committee home page is at:

http://www.open-std.org/jtc1/sc22/wg21/
Will it contain libraries for network programming as part of the
standard library? (Just as data structures and algorithms were
included as part of the ISO C++ 1998 standard library.)

I think it's quite unlikely. There isn't any network support in the
current draft, and the committee stopped accepting new features around a
year ago. In theory it could still happen, but only if it were forced to,
such as a country making it clear that it would only vote in favor of
the standard if this were added. Some such requirements have been known
for quite a while, and I doubt anybody would add such a thing at this
late date.
 

Ioannis Vranos

Jerry said:
I think it's quite unlikely. There isn't any network support in the
current draft, and the committee stopped accepting new features around a
year ago. In theory it could still happen, but only if it were forced to,
such as a country making it clear that it would only vote in favor of
the standard if this were added. Some such requirements have been known
for quite a while, and I doubt anybody would add such a thing at this
late date.


So what will be new in the next standard apart from some new algorithms?

Will some C99 junk abominations like the built-in _complex, long long,
etc. make it into the standard?
 

Ioannis Vranos

Corrected some text:



So what will be new in the next standard apart from some new algorithms?

Will some C99 junk like the built-in _complex, long long, etc. make it
into the standard?
 

Victor Bazarov

Ioannis said:
[..]
Will some C99 junk like built in _complex, long long, etc make into
the standard?

Are you *afraid* to get a copy and find out for yourself?

V
 

Erik Wikström

Corrected some text:




So what will be new in the next standard apart from some new algorithms?

Will some C99 junk like the built-in _complex, long long, etc. make it
into the standard?

C++ has std::complex, so it does not need any other. What is wrong with
long long? For more information about what will be in the next standard,
take a look at Wikipedia.
 

Ioannis Vranos

Erik said:
C++ has std::complex, so it does not need any other. What is wrong with
long long? For more information about what will be in the next standard,
take a look at Wikipedia.


Yes, I know C++ has std::complex and focuses on providing abstraction
facilities which can be used to define what we need. However, so far C++
has also been "a better C".

C99 took the way of providing "exotic" built-in types (with exotic
names, I would say, like "_complex"), ignoring the abstraction aims and
ideals of C++.

Regarding long long: the known issues bother me; the type-system rules
are broken in code that assumes long is the largest built-in type. Also,
long long is too long to type. I think that the existing C++03 built-in
integer types are sufficient and we do not need long long.

But as far as I can understand, long long will be included in C++0x/1x,
mainly for the sake of C compatibility. But I think we must realise that
C and C++ have no common future, so long long should be dropped, since
we do not need it.
 

Ioannis Vranos

Ioannis said:
Interesting link on C++0x indeed:

http://en.wikipedia.org/wiki/C++0x


However, I am a bit troubled by this:

"Standard C++ offers two kinds of string literals. The first kind,
contained within double quotes, produces a null-terminated array of type
const char. The second kind, defined as, L"", produces a null-terminated
array of type const wchar_t, where wchar_t is a wide-character. Neither
literal type offers support for Unicode-encoded string literals".


AFAIK, wchar_t supports the largest character set provided by the
system. So AFAIK wchar_t is Unicode on systems supporting Unicode. Am I
wrong somewhere?
 

Victor Bazarov

Ioannis said:
However, I am a bit troubled by this:

"Standard C++ offers two kinds of string literals. The first kind,
contained within double quotes, produces a null-terminated array of
type const char. The second kind, defined as, L"", produces a
null-terminated array of type const wchar_t, where wchar_t is a
wide-character. Neither literal type offers support for
Unicode-encoded string literals".

AFAIK, wchar_t supports the largest character set provided by the
system. So AFAIK wchar_t is Unicode on systems supporting Unicode. Am
I wrong somewhere?

There are several _different_ Unicode mappings/encodings; not all
can be [conveniently] supported by 'wchar_t' or by 'char'. Please
see http://en.wikipedia.org/wiki/Unicode.

V
 

James Kanze

However, I am a bit troubled by this:

"Standard C++ offers two kinds of string literals. The first kind,
contained within double quotes, produces a null-terminated array of type
const char. The second kind, defined as, L"", produces a null-terminated
array of type const wchar_t, where wchar_t is a wide-character. Neither
literal type offers support for Unicode-encoded string literals".
AFAIK, wchar_t supports the largest character set provided by the
system. So AFAIK wchar_t is Unicode on systems supporting Unicode. Am I
wrong somewhere?

An implementation is certainly allowed to use some form of
Unicode in wchar_t, and from a quality of implementation point
of view, it's what I would expect if the platform supports
Unicode. But the standard certainly doesn't require it---it
doesn't even require wchar_t to be larger than a char.

The next version of the C++ standard will require char16_t and
char32_t, using UTF-16 and UTF-32, respectively. And there will
be no less than 10 different types of string literals.
 

Victor Bazarov

James said:
[..]
The next version of the C++ standard will require char16_t and
char32_t, using UTF-16 and UTF-32, respectively. And there will
be no less than 10 different types of string literals.

Ten? I can only see four: regular ("blah"), UTF-16 (u"blah"), UTF-32
(U"blah"), and wide (L"blah"). What are the other six?

V
 

red floyd

Victor said:
James said:
[..]
The next version of the C++ standard will require char16_t and
char32_t, using UTF-16 and UTF-32, respectively. And there will
be no less than 10 different types of string literals.

Ten? I can only see four: regular ("blah"), UTF-16 (u"blah"), UTF-32
(U"blah"), and wide (L"blah"). What are the other six?

V

Raw (R"....")
UTF-8 (u8"....")
And I think the raw form comes in the various UTF and wide flavors as
well, but I'm not sure.
 

Ioannis Vranos

James said:
An implementation is certainly allowed to use some form of
Unicode in wchar_t, and from a quality of implementation point
of view, it's what I would expect if the platform supports
Unicode. But the standard certainly doesn't require it---it
doesn't even require wchar_t to be larger than a char.

OK, the standard also does not require that sizeof(int) > sizeof(char)
always, but this doesn't mean we have to introduce new "specific" types
because of this.


But again, it specifically mentions:

"Standard C++ offers two kinds of string literals. The first kind,
contained within double quotes, produces a null-terminated array of type
const char. The second kind, defined as, L"", produces a null-terminated
array of type const wchar_t, where wchar_t is a wide-character.

==> Neither literal type offers support for Unicode-encoded string
literals". Isn't this wrong?
 

Jerry Coffin

Ioannis said:
So what will be new in the next standard apart from some new algorithms?

In the preprocessor, most (all?) of the C99 additions are included.

In the language proper:
1) concepts and concept maps
2) variadic templates
3) new character/string types (e.g. UTF-8 strings)
4) new string literals (e.g. raw literals, Unicode literals)
5) auto and decltype
6) rvalue references

There's definitely more than this, but these are the ones that stick out
in my memory as meaning a lot. I won't try to describe any of these here
-- almost any one of them deserves a lot more than a single post.

In the library, the stuff originally in TR1 is now part of the standard.
This includes things like regular expressions, more smart pointers (e.g.
shared_ptr), unordered associative containers (normally hash-based),
function binders, random number generators, etc. Though I'm not sure the
definition is finished yet, threads and atomic operations are supposed
to be supported by the time all is said and done.
Will some C99 junk abominations like the built-in _complex, long long,
etc. make it into the standard?

long long is added, though I'm not sure what's particularly abominable
about it. C++ already had std::complex, so I don't think anybody cared
much about the C99 additions in that area.
 

James Kanze

James said:
[..]
The next version of the C++ standard will require char16_t and
char32_t, using UTF-16 and UTF-32, respectively. And there will
be no less than 10 different types of string literals.
Ten? I can only see four: regular ("blah"), UTF-16 (u"blah"),
UTF-32 (U"blah"), and wide (L"blah"). What are the other six?

I thought I posted a table recently:

literal      encoding   '\' recognized
"blah"       native     yes
u8"blah"     UTF-8      yes
u"blah"      UTF-16     yes
U"blah"      UTF-32     yes
L"blah"      native     yes
R"blah"      native     no
u8R"blah"    UTF-8      no
uR"blah"     UTF-16     no
UR"blah"     UTF-32     no
LR"blah"     native     no

There are four different character types, but you have string
literals either in the native encoding or in UTF-8 for char. In
addition, you can have string literals in which things like "\n"
represent a backslash followed by an n, rather than a newline.
Useful, for example, for specifying regular expressions, where
the expression might contain a large number of backslashes. (The
'R' here means "raw".)
 

James Kanze

Ioannis said:
OK, the standard also does not require that sizeof(int) >
sizeof(char) always, but this doesn't mean we have to
introduce new "specific" types because of this.

I'm not sure what your point is. Even if int's and char's have
the same size, they're different types.

C handled wchar_t as a typedef. This doesn't work in C++
because of overload resolution: you want wchar_t to behave as a
character type, when outputting it, for example; if it were a
typedef for int, you have a problem.
But again, it specifically mentions:
"Standard C++ offers two kinds of string literals. The first kind,
contained within double quotes, produces a null-terminated array of type
const char. The second kind, defined as, L"", produces a null-terminated
array of type const wchar_t, where wchar_t is a wide-character.
==> Neither literal type offers support for Unicode-encoded string
literals". Isn't this wrong?

What are you disagreeing with? Neither literal type is Unicode,
unless an implementation decides to make it Unicode. Most of
the ones I have access to don't.
 

Ioannis Vranos

James said:
I'm not sure what your point is. Even if int's and char's have
the same size, they're different types.

C handled wchar_t as a typedef. This doesn't work in C++
because of overload resolution: you want wchar_t to behave as a
character type, when outputting it, for example; if it were a
typedef for int, you have a problem.

I mean long long is merely introduced because the C committee decided
to introduce it in C99; no other real reason. What will happen if they
decide in the future to add another such built-in type?

What are you disagreeing with? Neither literal type is Unicode,
unless an implementation decides to make it Unicode. Most of
the ones I have access to don't.


Are the implementations you are mentioning compiling programs for OSes
that do provide Unicode?

Under Windows I suppose the current VC++ implements wchar_t as Unicode,
and on my OS (Linux) I suppose wchar_t is Unicode (I haven't verified
the latter, though).


So with these new character types, will we get Unicode under OSes that
do not support Unicode? With the introduction of these new types, what
will be the use of wchar_t?

Essentially I am talking about restricting the introduction of new
features in the new standard to only the most essential ones. I have
the feeling that all these Unicode types will be messy. Why are all
these Unicode types needed? After a new version of Unicode, will we
have it introduced as a new built-in type in the C++ standard? What
will be the use of the old ones? What I am saying is that we will have
a continuous accumulation of older built-in character types.

We are repeating C's mistakes here, adding built-in types instead of
providing them as libraries.
 

Bo Persson

Ioannis said:
Are the implementations you are mentioning compiling programs for OSes
that do provide Unicode?

Under Windows I suppose the current VC++ implements wchar_t as Unicode,
and on my OS (Linux) I suppose wchar_t is Unicode (I haven't verified
the latter, though).

But they might be different variants of Unicode, like UTF-16 and
UTF-32.
So with these new character types, will we get Unicode under OSes
that do not support Unicode?

Yes. Perhaps we will not use them much there?
With the introduction of these new
types, what will be the use of wchar_t?

Backward compatibility?

Often it will behave the same as either char16_t or char32_t. We just
don't know, portably.
Essentially I am talking about restricting the introduction of new
features in the new standard to only the most essential ones. I have
the feeling that all these Unicode types will be messy. Why are all
these Unicode types needed? After a new version of Unicode, will we
have it introduced as a new built-in type in the C++ standard? What
will be the use of the old ones? What I am saying is that we will
have a continuous accumulation of older built-in character types.
We are repeating C's mistakes here, adding built-in types instead of
providing them as libraries.

No one has come up with a good way to introduce string literals as a
library-only solution. The compiler has to know the types to do the
proper encoding.


Bo Persson
 
