Date: 2023-03-24
Project: Programming Language C++
Reference: ISO/IEC IS 14882:2020
Reply to: Jens Maurer <jens.maurer@gmx.net>
This document contains the C++ core language issues for which the Committee (J16 + WG21) has decided that no action is required, that is, issues with status "NAD" ("Not A Defect"), "dup" (duplicate), "concepts," and "extension."
This document is part of a group of related documents that together describe the issues that have been raised regarding the C++ Standard. The other documents in the group are:
For more information, including a description of the meaning of the issue status codes and instructions on reporting new issues, please see the Active Issues List.
Section references in this document reflect the section numbering of document WG21 N4944.
The intent appears to be that the following example is well-formed, even though D::f(int) hides B2::f():
struct B1 { void f(); };
struct B2 { void f(); };

struct [[base_check]] D : B1, B2 {
  using B1::f;
  void f(int);
};
However, this is not reflected in the current wording.
Rationale (November, 2010):
The consensus of the CWG was that the using-declaration does, indeed, hide B2::f() and thus D should be ill-formed.
_N3225_.D.2 [depr.static] says that declaring namespace-scope objects as static is deprecated. Declaring namespace-scope functions as static should also be deprecated.
Proposed resolution (10/99): In both 9.8.2.2 [namespace.unnamed] paragraph 2 and _N3225_.D.2 [depr.static] paragraph 1, replace
when declaring objects in a namespace scope

with

when declaring entities in a namespace scope

In addition, there are a number of locations in the Standard where use of or reference to static should be reconsidered. These include:
Rationale (04/00):
This issue, along with issue 174, has been subsumed by issue 223. Until the committee determines the meaning of deprecation, it does not make sense either to extend or reduce the number of features to which it is applied.
The decision to deprecate global static should be reversed.
Rationale (04/00):
This issue, along with issue 167, has been subsumed by issue 223. Until the committee determines the meaning of deprecation, it does not make sense either to extend or reduce the number of features to which it is applied.
Inheriting constructors should not be part of C++0x unless they have implementation experience.
Rationale (March, 2011):
The full Committee voted not to remove this feature.
[Addressed with a different approach by paper P0846R0, adopted at the November, 2017 meeting.]
Consider the following:
namespace N {
struct A { };
template<typename T>
T func(const A&) { return T(); }
}
void f() {
N::A a;
func<int>(a); // error
}
Although argument-dependent lookup would allow N::func to be found in this call, the < is taken as a less-than operator rather than as the beginning of a template argument list. If the use of the template keyword for syntactic disambiguation were permitted for unqualified-ids, this problem could be solved by prefixing the function name with template, allowing the template-id to be parsed and argument-dependent lookup to be performed.
Rationale (July, 2009):
This suggestion would need a full proposal and discussion by the EWG before the CWG could consider it.
Bullet 13.3 of _N4567_.5.1.1 [expr.prim.general] permits only non-static data members to appear without an object expression in an unevaluated operand. There does not appear to be a good reason to exclude non-static member functions from this permission.
Rationale (October, 2015):
Without knowing the type of this, overload resolution cannot be performed, and it seems not worth the trouble to allow member functions only in the case where there is no overloading.
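For illustration, a minimal sketch (names hypothetical) of the distinction the rationale draws:

#include <cstddef>

struct S {
    int x;
    long f();          // overloaded on const-ness: without an object expression,
    int  f() const;    // which f would S::f denote?
};

// OK: a non-static data member may appear in an unevaluated operand
// without an object expression.
constexpr std::size_t n = sizeof(S::x);

// Not permitted: the type of "this" is unknown here, so overload
// resolution between the two f's cannot be performed.
// using R = decltype(S::f());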
When a function throws an exception that is not in its exception-specification, std::unexpected() is called. According to _N4606_.15.5.2 [except.unexpected] paragraph 2,
If [std::unexpected()] throws or rethrows an exception that the exception-specification does not allow then the following happens: If the exception-specification does not include the class std::bad_exception (17.9.4 [bad.exception]) then the function std::terminate() is called, otherwise the thrown exception is replaced by an implementation-defined object of the type std::bad_exception, and the search for another handler will continue at the call of the function whose exception-specification was violated.
The “replaced by” wording is imprecise and undefined. For example, does this mean that the destructor is called for the existing exception object, or is it simply abandoned? Is the replacement in situ, so that a pointer to the existing exception object will now point to the std::bad_exception object?
Mike Miller: The call to std::unexpected() is not described as analogous to invoking a handler, but if it were, that would resolve this question; it is clearly specified what happens to the previous exception object when a new exception is thrown from a handler (14.2 [except.throw] paragraph 4).
This approach would also clarify other questions that have been raised regarding the requirements for stack unwinding. For example, 14.6.2 [except.terminate] paragraph 2 says that
In the situation where no matching handler is found, it is implementation-defined whether or not the stack is unwound before std::terminate() is called.
This requirement could be viewed as in conflict with the statement in _N4606_.15.5.2 [except.unexpected] paragraph 1 that
If a function with an exception-specification throws an exception that is not listed in the exception-specification, the function std::unexpected() is called (_N4606_.D.6 [exception.unexpected]) immediately after completing the stack unwinding for the former function.
If it is implementation-defined whether stack unwinding occurs before calling std::terminate() and std::unexpected() is called only after doing stack unwinding, does that mean that it is implementation-defined whether std::unexpected() is called if there is ultimately no handler found?
Again, if invoking std::unexpected() were viewed as essentially invoking a handler, the answer to this would be clear, because unwinding occurs before invoking a handler.
Rationale (February, 2017):
The issue is moot after the adoption of document P0003.
With the changes for issue 2256, extending destruction to apply to objects of scalar type, should invoking a pseudo-destructor end the lifetime of that object?
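A minimal sketch of the scenario being asked about:

void f() {
    using T = int;
    int i = 0;
    i.~T();   // pseudo-destructor call on an object of scalar type;
              // the question: with the issue 2256 changes, does this
              // end the lifetime of i?
}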
Rationale (February, 2019):
This question is resolved by paper P0593R4.
Paragraph 7 of _N4868_.6.5.6 [basic.lookup.classref] says,
If the id-expression is a conversion-function-id, its conversion-type-id shall denote the same type in both the context in which the entire postfix-expression occurs and in the context of the class of the object expression (or the class pointed to by the pointer expression).

Does this mean that the following example is ill-formed?

struct A { operator int(); } a;
void foo() {
  typedef int T;
  a.operator T();   // 1) error T is not found in the context
                    // of the class of the object expression?
}

The second bullet in paragraph 1 of 6.5.5.2 [class.qual] says,

a conversion-type-id of an operator-function-id is looked up both in the scope of the class and in the context in which the entire postfix-expression occurs and shall refer to the same type in both contexts

How about:

struct A { typedef int T; operator T(); };
struct B : A { operator T(); } b;
void foo() {
  b.A::operator T();   // 2) error T is not found in the context
                       // of the postfix-expression?
}

Is this interpretation correct? Or was the intent for this to be an error only if T was found in both scopes and referred to different entities?
If the intent was for these to be errors, how do these rules apply to template arguments?
template <class T1> struct A {
  operator T1();
};

template <class T2> struct B : A<T2> {
  operator T2();
  void foo() {
    T2 a = A<T2>::operator T2();           // 3) error? when instantiated T2 is not
                                           // found in the scope of the class
    T2 b = ((A<T2>*)this)->operator T2();  // 4) error when instantiated?
  }
};
(Note bullets 2 and 3 in paragraph 1 of 6.5.5.2 [class.qual] refer to postfix-expression. It would be better to use qualified-id in both cases.)
Erwin Unruh: The intent was that you look in both contexts. If you find it only once, that's the symbol. If you find it in both, both symbols must be "the same" in some respect. (If you don't find it, it's an error.)
Mike Miller: What's not clear to me in these examples is whether what is being looked up is T or int. Clearly the T has to be looked up somehow, but the "name" of a conversion function clearly involves the base (non-typedefed) type, not typedefs that might be used in a definition or reference (cf. 6.1 [basic.pre] paragraph 7 and 11.4.8 [class.conv] paragraph 5). (This is true even for types that must be written using typedefs because of the limited syntax in conversion-type-ids — e.g., the "name" of the conversion function in the following example

typedef void (*pf)();
struct S {
  operator pf();
};

is S::operator void(*)(), even though you can't write its name directly.)
My guess is that this means that in each scope you look up the type named in the reference and form the canonical operator name; if the name used in the reference isn't found in one or the other scope, the canonical name constructed from the other scope is used. These names must be identical, and the conversion-type-id in the canonical operator name must not denote different types in the two scopes (i.e., the type might not be found in one or the other scope, but if it's found in both, they must be the same type).
I think this is all very vague in the current wording.
Rationale (February, 2021):
This issue was resolved by the resolution of issue 1111.
A change was introduced into the language that made names first declared in friend declarations "invisible" to normal lookups until such time that the identifier was declared using a non-friend declaration. This is described in _N4868_.9.8.2.3 [namespace.memdef] paragraph 3 and 11.8.4 [class.friend] paragraph 9 (and perhaps other places).
The standard gives examples of how this all works with friend declarations, but there are some cases with nonfriend elaborated type specifiers for which there are no examples, and which might yield surprising results.
The problem is that an elaborated type specifier is sometimes a declaration and sometimes a reference. The meaning of the following code changes depending on whether or not friend class names are injected (visibly) into the enclosing namespace scope.
struct A;
struct B;

namespace N {
  class X {
    friend struct A;
    friend struct B;
  };
  struct A *p;   // N::A with friend injection, ::A without
  struct B;      // always N::B
}

Is this the desired behavior, or should all elaborated type specifiers (and not just those of the form "class-key identifier;") have the effect of finding previously declared "invisible" names and making them visible?
Mike Miller: That's not how I would categorize the effect of "struct B;". That declaration introduces the name "B" into namespace N in exactly the same fashion as if the friend declaration did not exist. The preceding friend declaration simply stated that, if a class N::B were ever defined, it would have friendly access to the members of N::X. In other words, the lookups in both "struct A*..." and "struct B;" ignore the friend declarations.
(The standard is schizophrenic on the issue of whether such friend declarations introduce names into the enclosing namespace. 6.4 [basic.scope] paragraph 4 says,
John Spicer: The previous declaration of B is not completely ignored though, because certainly changing "friend struct B;" to "friend union B;" would result in an error when B was later redeclared as a struct, wouldn't it?
Bill Gibbons: Right. I think the intent was to model this after the existing rule for local declarations of functions (which dates back to C), where the declaration is introduced into the enclosing scope but the name is not. Getting this right requires being somewhat more rigorous about things like the ODR because there may be declaration clashes even when there are no name clashes. I suspect that the standard gets this right in most places but I would expect there to be a few that are still wrong, in addition to the one Mike pointed out.
Mike Miller: Regarding "would result in an error when B was later redeclared":
I don't see any reason why it should. The restriction that the class-key must agree is found in 9.2.9.4 [dcl.type.elab] and is predicated on having found a matching declaration in a lookup according to 6.5.6 [basic.lookup.elab] . Since a lookup of a name declared only (up to that point) in a friend declaration does not find that name (regardless of whether you subscribe to the "does-not-introduce" or "introduces-invisibly" school of thought), there can't possibly be a mismatch.
I don't think that the Standard's necessarily broken here. There is no requirement that a class declared in a friend declaration ever be defined. Explicitly putting an incompatible declaration into the namespace where that friend class would have been defined is, to me, just making it impossible to define — which is no problem, since it didn't have to be defined anyway. The only error would occur if the same-named but unbefriended class attempted to use the nonexisting grant of friendship, which would result in an access violation.
(BTW, I couldn't find anything in the Standard that forbids defining a class with a mismatched class-key, only using one in an elaborated-type-specifier. Is this a hole that needs to be filled?)
John Spicer: This is what 9.2.9.4 [dcl.type.elab] paragraph 3 says:
class B;
union B {};

and

union B {};
class B;

are both invalid. I think this paragraph is intended to say that. I'm not so sure it actually does say that, though.
Mike Miller: Regarding "I think the intent was to model this after the existing rule for local declarations of functions (which dates back to C)":
Actually, that's not the C (1989) rule. To quote the Rationale from X3.159-1989:
Regarding "Getting this right requires being somewhat more rigorous":
Yes, I think if this is to be made illegal, it would have to be done with the ODR; the name-lookup-based current rules clearly (IMHO) don't apply. (Although to be fair, the [non-normative] note in 6.4 [basic.scope] paragraph 4 sounds as if it expects friend invisible injection to trigger the multiple-declaration provisions of that paragraph; it's just that there's no normative text implementing that expectation.)
Bill Gibbons: Nor does the ODR currently disallow:
translation unit #1:
struct A;

translation unit #2:
union A;

since it only refers to class definitions, not declarations.
But the obvious form of the missing rule (all declarations of a class within a program must have compatible struct/class/union keys) would also answer the original question.
The declarations need not be visible. For example:
translation unit #1:
int f() { return 0; }

translation unit #2:
void g() { extern long f(); }

is ill-formed even though the second "f" is not a visible declaration.
Rationale (10/99): The main issue (differing behavior of standalone and embedded elaborated-type-specifiers) is as the Committee intended. The remaining questions mentioned in the discussion may be addressed in dealing with related issues.
(See also issues 136, 138, 139, 143, 165, and 166.)
_N4868_.9.8.2.3 [namespace.memdef] paragraph 2 says,
Members of a named namespace can also be defined outside that namespace by explicit qualification (6.5.5.3 [namespace.qual]) of the name being defined, provided that the entity being defined was already declared in the namespace...

It is not clear whether block-scope extern declarations and friend declarations are sufficient to permit the named entities to be defined outside their namespace. For example,

namespace NS {
  struct A {
    friend struct B;
  };
  void foo() {
    extern void bar();
  }
}

struct NS::B { };    // 1) legal?
void NS::bar() { }   // 2) legal?
Rationale (10/99): Entities whose names are "invisibly injected" into a namespace as a result of friend declarations are not "declared" in that namespace until an explicit declaration of the entity appears at namespace scope. Consequently, the definitions in the example are ill-formed.
(See also issues 95, 136, 138, 139, 143, and 166.)
Consider the following example:
class C {
public:
  enum E {};
  friend void* operator new(size_t, E);
  friend void operator delete(void*, E);
};

void foo() {
  C::E e;
  C* ptr = new(e) C();
}
This code, which is valid in global scope, becomes ill-formed when the class definition is moved into a namespace, and there is no way to make it valid:
namespace N {
  class C {
  public:
    enum E {};
    friend void* operator new(size_t, E);
    friend void operator delete(void*, E);
  };
}

void foo() {
  N::C::E e;
  N::C* ptr = new(e) N::C();
}
The reason for this is that non-member allocation and deallocation functions are required to be members of the global scope (6.7.5.5.2 [basic.stc.dynamic.allocation] paragraph 1, 6.7.5.5.3 [basic.stc.dynamic.deallocation] paragraph 1), unqualified friend declarations declare names in the innermost enclosing namespace (_N4868_.9.8.2.3 [namespace.memdef] paragraph 3), and these functions cannot be declared in global scope at a point where the friend declarations could refer to them using qualified-ids because their second parameter is a member of the class and thus can't be named before the class containing the friend declarations is defined.
Possible solutions for this conundrum include invention of some mechanism to allow a friend declaration to designate a namespace scope other than the innermost enclosing namespace in which the friend class or function is to be declared or to relax the innermost enclosing namespace lookup restriction in _N4868_.9.8.2.3 [namespace.memdef] paragraph 3 for friend declarations that nominate allocation and deallocation functions.
Rationale (April, 2006):
The CWG acknowledged that it is not always possible to move code from the global scope into a namespace but felt that this problem was not severe enough to warrant changing the language to accommodate it. Possible solutions include moving the enumeration outside the class or defining member allocation and deallocation functions.
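A minimal sketch of the member-function workaround mentioned above (definitions simplified):

#include <cstddef>
#include <new>

namespace N {
  class C {
  public:
    enum E {};
    // Member allocation and deallocation functions need no friendship
    // and no namespace-scope declarations.
    static void* operator new(std::size_t sz, E) { return ::operator new(sz); }
    static void operator delete(void* p, E) { ::operator delete(p); }
  };
}

void foo() {
  N::C::E e{};
  N::C* ptr = new (e) N::C();   // finds C's member placement operator new
  ptr->~C();
  N::C::operator delete(ptr, e);
}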
_N4868_.9.8.2.3 [namespace.memdef] paragraph 3 is intended to prevent injection of names from friend declarations into the containing namespace scope:
If a friend declaration in a non-local class first declares a class or function the friend class or function is a member of the innermost enclosing namespace. The name of the friend is not found by unqualified lookup (6.5.3 [basic.lookup.unqual]) or by qualified lookup (6.5.5 [basic.lookup.qual]) until a matching declaration is provided in that namespace scope (either before or after the class definition granting friendship).
However, this does not address names declared by elaborated-type-specifiers that are part of the friend declaration. Are these names intended to be visibly injected? For example, is the following well-formed?
class A {
  friend class B* f();
};
B* bp;   // Is B visible here?
Implementations differ in their treatment of this example: EDG and MSVC++ 8.0 accept it, while g++ 4.1.1 rejects it.
Rationale (July, 2009):
The current specification does not restrict injection of names in elaborated-type-specifiers, and the consensus of the CWG was that no change is needed on this point.
The current wording of _N4868_.9.8.2.3 [namespace.memdef] and 13.9.4 [temp.expl.spec] requires that an explicit specialization be declared either in the same namespace as the template or in an enclosing namespace. It would be convenient to relax that requirement and allow the specialization to be declared in a non-enclosing namespace to which one or more of the template arguments belong.
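For concreteness, a sketch of what is being requested (names hypothetical):

namespace lib {
  template <class T> struct traits { };
}

namespace app {
  struct Key { };
  // Desired: declare the specialization here, in the namespace of the
  // template argument. Not permitted under the requirement described above:
  // template <> struct lib::traits<Key> { };
}

// What the current rules require instead:
namespace lib {
  template <> struct traits<app::Key> { };
}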
Additional note, April, 2015:
See EWG issue 48.
EWG 2022-11-11
This is a feature request, not a defect.
The standard is inconsistent in its use of a hyphen on the following: nontype vs. non-type, non-dependent vs. nondependent, non-deduced vs. nondeduced, and non-template vs. nontemplate. We should pick a preferred form.
Notes from the March 2004 meeting:
If this isn't a purely editorial issue, nothing is. We're referring this to the editor. We prefer the hyphenated forms.
(From item JP 03 of the Japanese National Body comments on the C++14 DIS ballot.)
A digit separator is allowed immediately following the prefix for an octal literal but not for a binary or hexadecimal literal. For example, 0'01 is permitted but 0b'01 and 0x'01 are not. This asymmetry makes tools such as automatic code generators more complicated than necessary. The digit separator should be consistently allowed or disallowed immediately following the prefix in all non-decimal integer literals.
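A brief illustration of the asymmetry:

int a = 0'01;      // OK: octal, separator after the leading 0
// int b = 0b'01;  // ill-formed: separator immediately after the 0b prefix
// int c = 0x'01;  // ill-formed: separator immediately after the 0x prefix
int d = 0b0'1;     // OK: separator between binary digits
int e = 0x0'1;     // OK: separator between hexadecimal digits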
Rationale (November, 2014):
CWG felt that the reported asymmetry is not a major difficulty and that it is more natural to think of the leading 0 in an octal literal as part of the numeric value rather than as a separate prefix, as it is with 0b and 0x. Consequently there was no consensus for a change to the existing specification.
5.13.8 [lex.ext] paragraphs 3-4 state in notes that the arguments to a literal operator template “can only contain characters from the basic source character set.” This restriction does not appear to occur anywhere in normative text, however.
Rationale (July, 2009):
The characters in the template arguments are the characters comprising n, the integer literal, or f, the floating literal. As such, they are constrained by the grammar to be members of the basic character set, and no further normative restriction is needed.
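For example, a minimal sketch (hypothetical suffix _len) showing that the template arguments are exactly the characters spelling the literal:

#include <cstddef>

template <char... Cs>
constexpr std::size_t operator""_len() { return sizeof...(Cs); }

static_assert(0x1F_len == 4, "arguments are '0', 'x', '1', 'F'");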
User-defined literals should not be part of C++0x unless they have implementation experience.
Rationale (March, 2011):
The feature has been implemented.
The format macros that are part of <inttypes.h> (incorporated into C++11 as <cinttypes>) are conventionally written with no whitespace separating them from the rest of the format string, e.g.,
printf("foo = "PRIu32", bar = "PRIi8"\n", foo, bar); printf("baz = "PRIu32"\n", baz);
This usage conflicts with user-defined literals.
Rationale (October, 2012):
CWG felt that whether this form of these macros needed to be supported in C++ should be examined by EWG.
Rationale (February, 2014):
EWG determined that no action should be taken on this issue.
A ud-suffix is defined in 5.13.8 [lex.ext] as an identifier. This prevents plausible user-defined literals for currency symbols, which are not categorized as identifier characters.
Rationale (June, 2014):
CWG felt that a decision on whether to allow this capability or not should be considered by EWG.
EWG 2022-11-11
This is a request for a new feature, which should be proposed in a paper to EWG. SG16 recommended not adding the feature.
According to the grammar in 5.13.8 [lex.ext], a ud-suffix is an identifier. However, implementations seem to agree that "x"or"y" is equivalent to "xy"or and not to true. Should the Standard permit identifier-like alternative tokens as ud-suffixes?
Rationale (October, 2015):
The identifier in a ud-suffix is required to begin with an underscore, and the identifier-like alternative tokens do not satisfy this requirement.
According to 6.1 [basic.pre] paragraph 4,
A name is a use of an identifier (5.10 [lex.name]), operator-function-id (12.4 [over.oper]), literal-operator-id (12.6 [over.literal]), conversion-function-id (11.4.8.3 [class.conv.fct]), or template-id (13.3 [temp.names]) that denotes an entity or label (8.7.6 [stmt.goto], 8.2 [stmt.label]).
Since typedefs are neither entities nor labels, it appears that a typedef-name is not a name.
There is an additional discrepancy regarding alias templates. According to 6.1 [basic.pre] paragraph 3, templates (including, presumably, alias templates) and their specializations are entities. However, the note in 13.3 [temp.names] paragraph 6 says,
[Note: A simple-template-id that names a class template specialization is a class-name (11.3 [class.name]). Any other simple-template-id that names a type is a typedef-name. —end note]
Thus an alias template specialization both is and is not an entity.
In 6.3 [basic.def.odr] paragraph 4 bullet 4, it's presumably the case that a conversion to T* requires that T be complete only if the conversion is from a different type. One could argue that there is no conversion (and therefore the text is accurate as it stands) if a cast does not change the type of the expression, but it's probably better to be more explicit here.
On the other hand, this text is non-normative (it's in a note).
Rationale (04/99): The relevant normative text makes this clear. Implicit conversion and static_cast are defined (in 7.3 [conv] and 7.6.1.9 [expr.static.cast], respectively) as equivalent to declaration with initialization, which permits pointers to incomplete types, and dynamic_cast (7.6.1.7 [expr.dynamic.cast]) explicitly prohibits pointers to incomplete types.
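A short illustration of those rules (hypothetical names):

struct Incomplete;   // declared but never defined here

void f(Incomplete* p, void* v) {
    void* a = p;                                   // OK: implicit conversion with an incomplete type
    Incomplete* b = static_cast<Incomplete*>(v);   // OK as well
    // dynamic_cast, by contrast, requires a complete class type.
    (void)a; (void)b;
}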
decltype applied to a function call expression requires a complete type (7.6.1.3 [expr.call] paragraph 3 and 6.3 [basic.def.odr] paragraph 4), even though decltype's result might be used in a way that does not actually require a complete type. This might cause undesired and excessive template instantiations. Immediately applying decltype should not require a complete type, for example, for the return type of a function call.
Additional note (October, 2010):
Another potential consideration in this question is the use of the return type in template argument deduction. If the return type is a specialization of a class template, one would want an error occurring in the instantiation of that specialization to cause a deduction failure, which would argue in favor of requiring the type to be complete. (However, that might also be covered by “when the completeness of the class type affects the semantics of the program” in 13.9.2 [temp.inst] paragraph 1.)
Rationale (November, 2010):
The CWG was persuaded by the SFINAE consideration.
Note:
This issue was raised again at the March, 2011 meeting and paper N3276, implementing this recommendation, was adopted for the FDIS.
The relationship between when an expression is potentially evaluated, especially with respect to contexts requiring constant expressions, and non-type template arguments is not clear and should be clarified. In particular, it seems that these contexts should be potentially-evaluated.
See also issue 1378.
Additional note, January, 2012:
Further discussion indicates that this is not a defect and should be closed as such.
Notes from the February, 2012 meeting:
CWG determined that the current wording is clear enough that an instantiation is required whenever it affects the semantics of the program.
According to 6.3 [basic.def.odr] paragraph 3,
this is odr-used if it appears as a potentially-evaluated expression (including as the result of the implicit transformation in the body of a non-static member function (11.4.3 [class.mfct.non.static])).
This wording does not distinguish between constant and non-constant expressions in determining whether this is odr-used or not.
Notes from the April, 2018 teleconference:
Specification of the odr-use of this was done to allow determination of whether this should be captured by a lambda. Recent changes to determine capture syntactically, rather than by odr-use, have rendered this issue almost moot. However, 6.3 [basic.def.odr] still describes when this is odr-used; this specification is no longer necessary and should be removed.
Rationale (February, 2019):
This specification is now used by contracts.
According to 6.3 [basic.def.odr] bullet 6.5,
in each definition of D, a default argument used by an (implicit or explicit) function call is treated as if its token sequence were present in the definition of D; that is, the default argument is subject to the requirements described in this paragraph (and, if the default argument has subexpressions with default arguments, this requirement applies recursively)
However, this rule is insufficient to handle a case like:
struct A {
  template<typename T> A(T);
};
void f(A a = []{});
inline void g() { f(); }
This should be an ODR violation, because the call to f() will invoke a different specialization of the constructor template in each translation unit, but it is not, because the rule says this example is equivalent to:
inline void g() { f([]{}); }
which is not an ODR violation, since the type of the closure object will be the same in every translation unit (9.2.8 [dcl.inline] paragraph 6).
Notes from the October, 2018 teleconference:
This will be addressed by work already underway to rework the relationship between lambdas and the ODR.

Rationale (February, 2021):
The resolution of issue 2300 makes clear that this example is an ODR violation.
This seems like it should be well-formed:
template <class T> T list(T x);
template <class H, class ...T> auto list(H h, T ...args) -> decltype(list(args...));
auto list3 = list(1, 2, 3);
but it isn't, because the second list isn't in scope in its own trailing-return-type; the point of declaration is after the declarator, which includes the trailing-return-type. And since int has no associated namespaces, the call in the return type only sees the first list. G++, EDG and Clang all reject the testcase on this basis.
But this seems like a natural pattern for writing variadic function templates, and we could support it by moving the point of declaration to the ->. This would mean having to deal with a function that only has a placeholder for a return type, but I think we can handle that.
Rationale (February, 2012):
This is a request for an extension to the language and is thus more appropriately addressed by EWG.
EWG 2022-11-11
This is a breaking change whose benefits and trade-offs need to be carefully analyzed.
Consider this code:
struct Base {
  enum { a, b, c, next };
};

struct Derived : public Base {
  enum { d = Base::next, e, f, next };
};

The idea is that the enumerator "next" in each class is the next available value for enumerators in further derived classes.
If we had written
enum { d = next, e, f, next };

I think we would run afoul of 6.4.7 [basic.scope.class]:

A name N used in a class S shall refer to the same declaration in its context and when re-evaluated in the completed scope of S. No diagnostic is required for a violation of this rule.

But in the original code, we don't have an unqualified "next" that refers to anything but the current scope. I think the intent was to allow the code, but I don't find the wording clear on that point.
Is there another section that makes it clear whether the original code is valid? Or am I being obtuse? Or should the quoted section say "An unqualified name N used in a class ..."?
Rationale (04/99): It is sufficiently clear that "name" includes qualified names and hence the usual lookup rules make this legal.
Consider:
struct A {
  struct B {
    typedef int X;
  };
};

template<class B> struct C : A {
  B::X q;   // Ok: A::B.
  struct U {
    typedef int X;
  };
  template<class U> struct D;
};

template<class B> template<class U> struct C<B>::D {
  typename U::X r;   // which U?
};

C<int>::D<double> y;
In the definition of D, U definitely needs to be in scope as soon as it's declared because it might have been used in subsequent template parameter declarations, or it might have been used in the id-expression that names the declared entity — just as B is used in C<B>::D. (So 6.4.9 [basic.scope.temp] does the right thing for that purpose.)
But it would be nice if the result of lookup did not depend on whether D's body appears lexically inside C's body; currently, we don't seem to have the wording that makes it so.
Rationale (October, 2012):
This example is covered by the wording in 13.8.2 [temp.local] paragraphs 7-8: the template parameter is found.
The name lookup in a base-specifier and a mem-initializer differ in that the former ignores non-type names but the latter does not. When the mem-initializer-id is qualified, this can lead to surprising results:
struct file_stat : ::stat {     // the class
  file_stat() : ::stat{} {}     // the function
};
Rationale (May, 2015):
The use of a qualified-id as a mem-initializer-id is sufficiently unusual that it is not worth changing the lookup rules to accommodate it.
The description of name lookup in the parameter-declaration-clause of member functions in 6.5.3 [basic.lookup.unqual] paragraphs 7-8 is flawed in at least two regards.
First, both paragraphs 7 and 8 apply to the parameter-declaration-clause of a member function definition and give different rules for the lookup. Paragraph 7 applies to names "used in the definition of a class X outside of a member function body...," which includes the parameter-declaration-clause of a member function definition, while paragraph 8 applies to names following the function's declarator-id (see the proposed resolution of issue 41), including the parameter-declaration-clause.
Second, paragraph 8 appears to apply to the type names used in the parameter-declaration-clause of a member function defined inside the class definition. That is, it appears to allow the following code, which was not the intent of the Committee:
struct S {
  void f(I i) { }
  typedef int I;
};
Additional note, January, 2012:
brace-or-equal-initializers for non-static data members are intended effectively as syntactic sugar for mem-initializers in constructor definitions; the lookup should be the same.
Rationale (February, 2021):
This issue was resolved by the resolution of issue 1352.
The wording of 6.5.3 [basic.lookup.unqual] paragraph 2 is misleading. It says:
The declarations from the namespace nominated by a using-directive become visible in a namespace enclosing the using-directive; see 9.8.4 [namespace.udir].
According to 9.8.4 [namespace.udir] paragraph 1, that namespace is
the nearest enclosing namespace which contains both the using-directive and the nominated namespace.
That would seem to imply the following:
namespace outer {
  namespace inner {
    int i;
  }
  void f() {
    using namespace inner;
  }
  int j = i;   // inner::i is "visible" in namespace outer
}
Suggested resolution: Change the first sentence of 6.5.3 [basic.lookup.unqual] paragraph 2 to read:
The declarations from the namespace nominated by a using-directive become visible in the scope in which the using-directive appears after the using-directive.
Notes from the 4/02 meeting:
After a lot of discussion of possible wording changes, we decided the wording should be left alone. 6.5.3 [basic.lookup.unqual] paragraph 2 is not intended to be a full specification; that's in 9.8.4 [namespace.udir] paragraph 1. See also 6.4.6 [basic.scope.namespace] paragraph 1.
According to 6.5.3 [basic.lookup.unqual] paragraph 10,
In a friend declaration naming a member function, a name used in the function declarator and not part of a template-argument in the declarator-id is first looked up in the scope of the member function's class (6.5.2 [class.member.lookup]). If it is not found, or if the name is part of a template-argument in the declarator-id, the look up is as described for unqualified names in the definition of the class granting friendship.
The corresponding specification for non-friend declarations in paragraph 8 applies the class-scope lookup only to names that follow the declarator-id. The same should be true in friend declarations.
Proposed resolution (February, 2018):
Change 6.5.3 [basic.lookup.unqual] paragraph 8 to read as follows:

For the members of a class X, a name used in a member function body, in a default argument, in a noexcept-specifier, in the brace-or-equal-initializer of a non-static data member (11.4 [class.mem]), or in the declaration of a class member outside of the definition of X, following the member's declarator-id, shall be declared in one of the following ways:

before its use in the block in which it is used or in an enclosing block (8.4 [stmt.block]) within the body of the member function, or

as a member of class X or as a member of a base class of X (6.5.2 [class.member.lookup]), or

if X is a nested class of class Y (11.4.12 [class.nest]), shall be a member of Y, or shall be a member of a base class of Y (this lookup applies in turn to Y's enclosing classes, starting with the innermost enclosing class), or

if X is a local class (11.6 [class.local]) or is a nested class of a local class, before the definition of class X in a block enclosing the definition of class X, or

if X is a member of namespace N, or is a nested class of a class that is a member of N, or is a local class or a nested class within a local class of a function that is a member of N, before the use of the name, in namespace N or in one of N's enclosing namespaces, or

for a friend declaration in a class Y, in a scope that would be searched for a name appearing within Y.
Delete 6.5.3 [basic.lookup.unqual] paragraph 10 and combine its example with that of paragraph 8:
In a friend declaration naming a member function, a name used in the function declarator and not part of a template-argument in the declarator-id is first looked up in the scope of the member function's class (6.5.2 [class.member.lookup]). If it is not found, or if the name is part of a template-argument in the declarator-id, the look up is as described for unqualified names in the definition of the class granting friendship.

[Example:

struct A {
  typedef int AT;
  void f1(AT);
  void f2(float);
  template <class T> void f3();
};
struct B {
  typedef char AT;
  typedef float BT;
  friend void A::f1(AT);      // parameter type is A::AT
  friend void A::f2(BT);      // parameter type is B::BT
  friend void A::f3<AT>();    // template argument is B::AT
};
—end example]
Notes from the February, 2018 teleconference:
There was some concern as to whether the added lookup for friend function declarations placed the additional lookups in the correct sequence relative to the existing lookups and whether the new specification reflects any existing practice.
Rationale (March, 2018):
After further discussion, CWG determined that the semantics described in the existing wording were the most appropriate out of the alternatives considered.
Consider an example like the following:
template <typename T> void doit(const T& t, const T& t2) { }

template <typename T>
struct Container {
  auto doit(Container<T> &rhs) noexcept(noexcept(doit(T{}, T{})))
      -> decltype(doit(T{}, T{}));
};

Container<int> c;
This would appear to be ill-formed because the exception specification is a delayed-parse region, where the lookup is in the context of the completed class, while the lookup in the decltype in the return type is done immediately. The latter should find the two-parameter version of doit, as expected, while the former finds the member, one-parameter version. Current implementations accept the code, however, and it seems unfortunate that the meaning would be different in the two contexts.
Rationale, June, 2018:
The example is ill-formed: the reference to doit in the return type would refer to the member function in the completed class, which is ill-formed, no diagnostic required, per 6.4.7 [basic.scope.class] paragraph 2.
When a union is used in argument-dependent lookup, the union's type is not an associated class type. Consequently, code like this will fail to work.
union U {
  friend void f(U);
};

int main() {
  U u;
  f(u);   // error: no matching f — U is not an associated class
}

Is this an error in the description of unions in argument-dependent lookup?
Also, this section is written as if unions were distinct from classes. So adding unions to the "associated classes" requires either rewriting the section so that "associated classes" can include unions, or changing the term to be more inclusive, e.g. "associated classes and unions" or "associated types".
Jason Merrill: Perhaps in both cases, the standard text was intended to only apply to anonymous unions.
Liam Fitzpatrick: One cannot create expressions of an anonymous union type.
Rationale (04/99): Unions are class types, so the example is well-formed. Although the wording here could be improved, it does not rise to the level of a defect in the Standard.
In discussing issue 197, the question arose as to whether the handling of fundamental types in argument-dependent lookup is actually what is desired. This question needs further discussion.
Rationale (March, 2011):
There does not seem to be sufficient motivation at this point, with an additional eleven years' experience, to make a change.
I believe the following code example should unambiguously call the member operator+. Am I right?
//--- some library header ---//
namespace N1 {
  template<class T> struct Base { };

  template<class T> struct X {
    struct Y : public Base<T> {
      // here's a member operator+
      Y operator+( int _Off ) const { return Y(); }
    };

    Y f( unsigned i ) { return Y() + i; }   // the "+" in question
  };
}

//--- some user code ---//
namespace N2 {
  struct Z { };

  template<typename T>   // here's another operator+
  int* operator+( T , unsigned ) { static int i ; return &i ; }
}

int main() {
  N1::X< N2::Z > v;
  v.f( 0 );
}
My expectation is that 6.5.4 [basic.lookup.argdep] would govern, specifically:
If the ordinary unqualified lookup of the name finds the declaration of a class member function, the associated namespaces and classes are not considered.

So I think the member should hide the otherwise-better-matching one in the associated namespace. Here's what compilers do:
Agree with me and call the member operator+: Borland 5.5, Comeau 4.3.0.1, EDG 3.0.1, Metrowerks 8.0, MSVC 6.0
Disagree with me and try to call N2::operator+: gcc 2.95.3, 3.1.1, and 3.2; MSVC 7.0
Simple so far, but someone tells me that 12.2.2.3 [over.match.oper] muddies the waters. There, paragraph 10 summarizes that subclause:
[Note: the lookup rules for operators in expressions are different than the lookup rules for operator function names in a function call, ...

In particular, consider the above call to "Y() + unsigned" and please help me step through 12.2.2.3 [over.match.oper] paragraph 3:

... for a binary operator @ with a left operand of a type whose cv-unqualified version is T1 and a right operand of a type whose cv-unqualified version is T2,

OK so far, here @ is +, and T1 is N1::X::Y.

three sets of candidate functions, designated member candidates, non-member candidates and built-in candidates, are constructed as follows:

[and later are union'd together to get the candidate list]

If T1 is a class type, the set of member candidates is the result of the qualified lookup of T1::operator@ (over.call.func); otherwise, the set of member candidates is empty.

So there is one member candidate, N1::X::Y::operator+.
The set of non-member candidates is the result of the unqualified lookup of operator@ in the context of the expression according to the usual rules for name lookup in unqualified function calls (basic.lookup.argdep) except that all member functions are ignored.
*** This is the question: What does that last phrase mean? Does it mean:
a) first apply the usual ADL rules to generate a candidate list, then ignore any member functions in that list (this is what I believe and hope it means, and in particular it means that the presence of a member will suppress names that ADL would otherwise find in the associated namespaces); or
b) something else?
In short, does N2::operator+ make it into the candidate list? I think it shouldn't. Am I right?
John Spicer: I believe that the answer is sort-of "a" above. More specifically, the unqualified lookup consists of a "normal" unqualified lookup and ADL. ADL always deals with only namespace members, so the "ignore member functions" part must affect the normal lookup, which should ignore class members when searching for an operator.
I suspect that the difference between compilers may have to do with details of argument-dependent lookup. In the example given, the argument types are "N1::X<N2::Z>::Y" and "unsigned int". In order for N2::operator+ to be a candidate, N2 must be an associated namespace.
N1::X<N2::Z>::Y is a class type, so 6.5.4 [basic.lookup.argdep] says that its associated classes are its direct and indirect base classes, and its namespaces are the namespaces of those classes. So, its associated namespace is just N1.
6.5.4 [basic.lookup.argdep] also says:
If T is a template-id, its associated namespaces and classes are the namespace in which the template is defined; for member templates, the member template's class; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which any template template arguments are defined; and the classes in which any member templates used as template template arguments are defined. [Note: non-type template arguments do not contribute to the set of associated namespaces. ]

First of all, there is a problem with the term "is a template-id". template-id is a syntactic construct and you can't really talk about a type being a template-id. Presumably, this is intended to mean "If T is the type of a class template specialization ...". But does this apply to N1::X<N2::Z>::Y? Y is a class nested within a class template specialization. In addition, its base class is a class template specialization.
I think this raises two issues:
Notes from the April 2003 meeting:
The ADL rules in the standard sort of look as if they are fully recursive, but in fact they are not; in some cases, enclosing classes and base classes are considered, and in others they are not. Microsoft and g++ did fully-recursive implementations, and EDG and IBM did it the other way. Jon Caves reports that Microsoft saw no noticeable difference (e.g., no complaints from customers internal or external) when they made this change, so we believe that even if the rules are imperfect the way they are in the standard, they are clear and the imperfections are small enough that programmers will not notice them. Given that, it seemed prudent to make no changes and just close this issue.
The template-id issue is spun off as issue 403.
Argument-dependent lookup does not consider the elements of an initializer list used as an argument. This seems inconsistent:
namespace NS {
  struct X { } ;
  void f( std::initializer_list<X> ) { }
}

int main() {
  NS::X x ;
  // ADL fails to find NS::f
  f( {x,x,x} ) ;
  // OK. ADL finds NS::f
  auto i = {x,x,x} ;
  f( i ) ;
  // Also OK
  f( std::initializer_list<NS::X>{x,x,x} ) ;
}
Rationale (October, 2015):
Argument-dependent lookup makes sense when the arguments correspond to actual parameters of the function. In the case of an initializer list, however, the elements of the initializer list need not bear any relationship to the actual parameter type of the function; instead, they provide values for aggregate initialization or construction of the object being initialized, and there is no reason to expect that that type will have the same associated namespace as the types of the elements of the initializer list.
One would expect to find a definition of the terms “associated class” and “associated namespace” in 6.5.4 [basic.lookup.argdep], but there is none. Note also that “associated class” is used in a different sense in 7.6.10 [expr.eq] bullet 3.6, and that drafting being proposed for other issues also uses the term differently.
Rationale (October, 2015):
CWG felt that the current usage is plain English, not a technical term, and is clear enough.
There is a discrepancy between the syntaxes allowed for defining a constructor and a destructor of a class template. For example:
template <class> struct S {
  S();
  ~S();
};

template <class T> S<T>::S<T>() { }    // error
template <class T> S<T>::~S<T>() { }   // okay
The reason for this is that 6.5.5.2 [class.qual] paragraph 2 says that S::S is “considered to name the constructor,” which is not a template and thus cannot accept a template argument list. On the other hand, the second S in S::~S finds the injected-class-name, which “can be used with or without a template-argument-list” (13.8.2 [temp.local] paragraph 1) and thus satisfies the requirement to name the destructor's class (11.4.7 [class.dtor] paragraph 1).
Would it make sense to allow the template-argument-list in the constructor declaration and thus make the language just a little easier to use?
Rationale (July, 2007):
The CWG noted that the suggested change would be confusing in the case where the class template had both template and non-template constructors.
6.6 [basic.link] paragraph 8 says,
A name with no linkage (notably, the name of a class or enumeration declared in a local scope (6.4.3 [basic.scope.block])) shall not be used to declare an entity with linkage.

This wording does not, but should, prohibit use of an unnamed local type in the declaration of an entity with linkage. For example,

void f() {
  extern struct { } x;   // currently allowed
}
Proposed resolution: Change the text in 6.6 [basic.link] paragraph 8 from:
A name with no linkage (notably, the name of a class or enumeration declared in a local scope (6.4.3 [basic.scope.block])) shall not be used to declare an entity with linkage.

to:

A name with no linkage (notably, the name of a class or enumeration declared in a local scope (6.4.3 [basic.scope.block])) or an unnamed type shall not be used to declare an entity with linkage.

In section 6.6 [basic.link] paragraph 8, add to the example, before the closing brace of function f:
extern struct {} x; // ill-formed
Rationale (10/00): The proposed change would have introduced an incompatibility with the C language. For example, the global declaration
static enum { A, B, C } abc;
represents an idiom that is used in C but would be prohibited under this resolution.
It is unclear to what extent entities without names match across translation units. For example,
struct S {
  int :2;
  enum { a, b, c } x;
  static class {} *p;
};
If this declaration appears in multiple translation units, are all these members "the same" in each declaration?
A similar question can be asked about non-member declarations:
// Translation unit 1:
extern enum { d, e, f } y;

// Translation unit 2:
extern enum { d, e, f } y;

// Translation unit 3:
enum { d, e, f } y;
Is this valid C++? Is it valid C?
James Kanze: S::p cannot be defined, because to do so requires a type specifier and the type cannot be named. ::y is valid C because C only requires compatible, not identical, types. In C++, it appears that there is a new type in each declaration, so it would not be valid. This differs from S::x because the unnamed type is part of a named type — but I don't know where or if the Standard says that.
John Max Skaller: It's not valid C++, because the type is a synthesised, unique name for the enumeration type which differs across translation units, as if:
extern enum _synth1 { d, e, f } y;
...
extern enum _synth2 { d, e, f } y;
had been written.
However, within a class, the ODR implies the types are the same:
class X { enum { d } y; };
in two translation units ensures that the type of member y is the same: the two X's obey the ODR and so denote the same class, and it follows that there's only one member y and one type that it has.
(See also issues 132 and 216.)
Rationale (February, 2021):
The resolution of issue 2300 and paper P2115R0 have resolved these questions.
Consider:
namespace { extern "C" void f() { } }
Does f have internal or external linkage? Implementations seem to give f external linkage, but the standard prescribes internal linkage per 6.6 [basic.link] paragraph 4.
Rationale (November, 2016):
The specification is as intended.
6.7.3 [basic.life] and 11.4.7 [class.dtor] discuss explicit management of object lifetime. It seems clear that most object lifetime issues apply to sub-objects (array elements, and data members) as well. The standard supports
struct X { T t; } x;
T* pt = &x.t;
pt->~T();
new(pt) T;
and this kind of behavior is useful in allocators.
However the standard does not seem to prohibit the same operations on base sub-objects.
struct D : B { ... } d;
B* pb = &d;
pb->~B();
new(pb) B;
However if B and/or D have virtual member functions or virtual bases, it is unlikely that this code will result in a well-formed D object in current implementations (note that the various lines may be in different functions).
Suggested resolution: 11.4.7 [class.dtor] should be modified so that explicit destruction of base-class sub-objects be made illegal, or legal only under some restrictive conditions.
Rationale (04/01):
Reallocation of a base class subobject is already disallowed by 6.7.3 [basic.life] paragraph 7.
6.7.3 [basic.life] was never adjusted for threads. In particular, it describes what may be done with objects in various intervals. In general when the Standard uses words like “during,” it is referring to intervals defined by “sequenced before” ordering. In this context, however, all the specifications need to use the “happens before” ordering.
Suggested resolution:
Add the following at the beginning of 6.7.3 [basic.life]:
All statements about the ordering of evaluations in this section, using words like “before,” “after,” and “during,” refer to the “happens before” order defined in 6.9.2 [intro.multithread]. [Note: We ignore situations in which evaluations are unordered by “happens before,” since these require a data race (6.9.2 [intro.multithread]), which already results in undefined behavior. —end note]
Rationale (August, 2010):
The text is already in the FCD.
The restrictions in 6.7.3 [basic.life] paragraph 7 on when the storage for an object containing a reference member can be reused seem overly restrictive.
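A minimal sketch of the kind of code affected (names hypothetical):

#include <new>

struct Holder { int& r; };   // class with a reference member

void f(int& a, int& b) {
    Holder h{a};
    h.~Holder();
    ::new (static_cast<void*>(&h)) Holder{b};   // storage reuse
    // Under the restriction discussed here, the name h may not be used to
    // refer to the new object; std::launder (C++17) later provided a way
    // to obtain a usable pointer to it.
    Holder* p = std::launder(&h);
    (void)p;
}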
Rationale (August, 2011):
CWG did not find a persuasive use case for a change to the existing rules.
The Standard is self-contradictory regarding which destructor calls end the lifetime of an object. 6.7.3 [basic.life] paragraph 1 says,
The lifetime of an object of type T ends when:
if T is a class type with a non-trivial destructor (11.4.7 [class.dtor]), the destructor call starts, or
the storage which the object occupies is reused or released.
i.e., the lifetime of an object of a class type with a trivial destructor persists until its storage is reused or released. However, 11.4.7 [class.dtor] paragraph 15 says,
Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (6.7.3 [basic.life]).
implying that invoking any destructor, even a trivial one, ends the lifetime of the associated object. Similarly, 11.9.5 [class.cdtor] paragraph 1 says,
For an object with a non-trivial destructor, referring to any non-static member or base class of the object after the destructor finishes execution results in undefined behavior.
A similar question arises for pseudo-destructors for non-class types.
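A minimal example of the apparent contradiction:

struct T { int x; };   // trivial destructor

void g() {
    T t{42};
    t.~T();        // 11.4.7 [class.dtor] paragraph 15: t "no longer exists" ...
    int y = t.x;   // ... yet per 6.7.3 [basic.life] paragraph 1 its lifetime
                   // has not ended, since the destructor is trivial and the
                   // storage has not been reused or released
    (void)y;
}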
Notes from the August, 2011 meeting:
CWG will need a paper exploring this topic before it can act on the issue.
Rationale (February, 2021):
The resolution of issue 2256 makes it clear that the destruction of an object, no matter how accomplished, ends its lifetime.
Subclause 6.7.3 [basic.life] bullet 8.5 says that o1 is only transparently replaceable by o2 if
either o1 and o2 are both complete objects, or o1 and o2 are direct subobjects of objects p1 and p2, respectively, and p1 is transparently replaceable by p2.
This disallows most of the intended uses of the transparent replacement rule, including example 3 in 11.5.1 [class.union.general], which is similar to:
union A {
  int n;
  string s;
};
A a;
// Does not transparently replace A::s subobject, because
// the created object is a complete object.
new (&a.s) string("hello");
string t = a.s;
The rule was changed in response to NB comment US 041 (C++20 CD) in what appears to be an over-reach: US 041 says that a member subobject should not transparently replace an unrelated member subobject, but is silent about complete objects transparently replacing members.
CWG 2023-01-06
Issues 2676 and 2677 were split off from this issue.
Subclause 6.7.2 [intro.object] paragraph 2 specifies that "the created object is a subobject of [the original] containing object" for the example above. This issue is therefore NAD.
The global allocation functions are implicitly declared in every translation unit with exception-specifications (6.7.5.5 [basic.stc.dynamic] paragraph 2). It is not clear what should happen if a replacement allocation function is declared without an exception-specification. Is that a conflict with the implicitly-declared function (as it would be with explicitly-declared functions, and presumably is if the <new> header is included)? Or does the new declaration replace the implicit one, including the lack of an exception-specification? Or does the implicit declaration prevail? (Regardless of the exception-specification or lack thereof, it is presumably undefined behavior for an allocation function to exit with an exception that cannot be caught by a handler of type std::bad_alloc (6.7.5.5.2 [basic.stc.dynamic.allocation] paragraph 3).)
Rationale (November, 2014):
The predeclared allocation functions no longer have an exception-specification, so formally this issue is no longer applicable. As noted in the rationale of issue 1948, however, the intent is that the predeclarations are no different from ordinary declarations, so the replacement functions must have compatible exception-specifications.
Some implementations accept code like
#include <cstddef> // to get size_t
void* operator new(std::size_t) noexcept { ... }
This declaration conflicts with the predeclaration of operator new with no exception-specification.
See also issue 967.
Rationale (November, 2014):
The specification intentionally makes such replacement functions ill-formed.
Speaking of the value returned by an allocation function, 6.7.5.5.2 [basic.stc.dynamic.allocation] paragraph 2 says,
The pointer returned shall be suitably aligned so that it can be converted to a pointer of any complete object type with a fundamental alignment requirement
However, the various “Effects” specifications in 17.6.3 [new.delete] have a different formulation:
...allocate size bytes of storage suitably aligned to represent any object of that size.
These should be reconciled.
Rationale (November, 2016):
The adoption of paper P0035R4 has rendered this issue moot.
For certain data types on some hardware, a given object can be accessed most efficiently with one alignment but can be successfully accessed if allocated at a less-stringent boundary. Should the Standard specify the minimum or the preferred alignment as the value of the alignof?
Rationale (June, 2014):
The existing wording is clear that the result of alignof is the minimal alignment. If an operator returning the preferred alignment is desired, that request should be addressed to EWG.
6.7.7 [class.temporary] paragraph 4 seems self-contradictory:
the temporary that holds the result of the expression shall persist until the object's initialization is complete... the temporary is destroyed after it has been copied, before or when the initialization completes.

How can it be destroyed "before the initialization completes" if it is required to "persist until the object's initialization is complete"?
Rationale (04/00):
It was suggested that "before the initialization completes" refers to the case in which some part of the initialization terminates by throwing an exception. In that light, the apparent contradiction does not apply.
The resolution of issues 616 and 1213, making the result of a member access or subscript expression applied to a prvalue an xvalue, means that binding a reference to such a subobject of a temporary does not extend the temporary's lifetime. 6.7.7 [class.temporary] should be revised to ensure that it does.
Proposed resolution (February, 2014): [SUPERSEDED]
This issue is resolved by the resolution of issue 1299.
Rationale (February, 2019):
This concern is already covered by 6.7.7 [class.temporary] paragraph 6:
The temporary object to which the reference is bound or the temporary object that is the complete object of a subobject to which the reference is bound persists for the lifetime of the reference if...
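For example, under that wording (a minimal sketch):

struct S { int m; };

const int& r = S{}.m;   // S{} is a prvalue and S{}.m an xvalue; the complete temporary
                        // object persists for the lifetime of r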
If an init-capture binds a const reference to a temporary, is the lifetime of the temporary extended to match that of the lambda? For example,
struct S { ~S(); };
const S f();
auto &&lambda = [&x(f())] () -> auto& { return x; };
auto &y = lambda(); // ok?
Notes from the September, 2013 meeting:
CWG agreed that there is no lifetime extension in this example.
Rationale (June, 2014):
After further consideration, CWG agreed that this example should extend the lifetime of the temporary (because the notional variable is a reference) and that the existing text is clear enough in this regard.
Following the definition in Clause 11 [class] paragraph 4 the following is a valid POD (actually a POD-struct):
struct test { const int i; };
The legality of PODs with const members is also implied by the text of 7.6.2.8 [expr.new] bullet 15.1, sub-bullet 2 and 11.9.3 [class.base.init] bullet 4.2.
6.8 [basic.types] paragraph 3 states that
For any POD type T, if two pointers to T point to distinct objects obj1 and obj2, if the value of obj1 is copied into obj2, using the memcpy library function, obj2 shall subsequently hold the same value as obj1.
[Note: this text was changed by TC1, but the essential point stays the same.]
This implies that the following is required to work:
test obj1 = { 1 };
test obj2 = { 2 };
memcpy( &obj2, &obj1, sizeof(test) );
The memcpy of course changes the value of the const member, surely something that shouldn't be allowed.
Suggested resolution:
It is recommended that 6.8 [basic.types] paragraph 3 be reworded to exclude PODs which contain (directly or indirectly) members of const-qualified type.
Rationale (October, 2004):
9.2.9.2 [dcl.type.cv] paragraph 4 already forbids modifying a const member of a POD struct. The prohibition need not be repeated in 6.8 [basic.types].
6.8 [basic.types] paragraph 11 requires that a class type have a trivial copy constructor in order to be classified as a literal type. This seems overly restrictive; presumably having a constexpr copy constructor would suffice. (Note that a trivial copy constructor is a constexpr constructor according to 9.2.6 [dcl.constexpr] paragraph 4.)
Rationale (June, 2008):
A copy constructor takes a reference as its first parameter, thus no user-declared copy constructor can be constexpr.
Should cv-qualified and cv-unqualified versions of fundamental types be considered to be layout-compatible types?
Rationale (August, 2011):
The purpose of “layout compatible” types in C++ is for C compatibility with respect to the common initial sequence of structs appearing in unions. However, C requires that corresponding members have compatible types, and compatible types must have the same cv-qualification. Consequently, this issue is not a defect.
6.8.2 [basic.fundamental] paragraph 6 states,
As described below, bool values behave as integral types.
This sentence looks definitely out of order: how can a value behave as a type?
Suggested resolution:
Remove the sentence entirely, as it doesn't supply anything that isn't already stated in the following paragraphs and in the referenced section about integral promotion.
Rationale (July, 2007):
This is, at most, an editorial issue with no substantive impact. The suggestion has been forwarded to the project editor for consideration.
6.8.2 [basic.fundamental] paragraph 5 refers to a C header instead of to its C++ equivalent:
...Types char16_t and char32_t denote distinct types with the same size, signedness, and alignment as uint_least16_t and uint_least32_t, respectively, in <stdint.h>, called the underlying types.
Rationale (August, 2011)
This is an editorial issue that has been transmitted to the project editor.
Although 6.8.2 [basic.fundamental] paragraph 7 classifies bool as an integral type, the values of true and false are not specified — only that the results of converting them to another integral type are 1 and 0, respectively. This omission leaves unspecified whether false is an integral null pointer constant or not.
Rationale (February, 2012):
The resolution of issue 903 makes it clear that false is not a null pointer constant.
This issue is for tracking various concerns that are raised in paper N3057.
Rationale (August, 2010):
The paper was voted in in Pittsburgh.
It is not clear from the wording of 6.9.2 [intro.multithread] that different statements in the same function cannot be executed by different threads.
Rationale (September, 2013):
SG-1 determined that the existing wording is clear enough.
According to 6.9.2 [intro.multithread] paragraph 24,
The implementation may assume that any thread will eventually do one of the following:
terminate,
make a call to a library I/O function,
access or modify a volatile object, or
perform a synchronization operation or an atomic operation.
[Note: This is intended to allow compiler transformations such as removal of empty loops, even when termination cannot be proven. —end note]
Some programmers find this liberty afforded to implementations to be disadvantageous; see this blog post for a discussion of the subject.
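A minimal sketch of the kind of loop covered by this permission (assuming x is an ordinary, non-volatile, non-atomic variable that the loop never modifies):

void spin(int x) {
  while (x != 0) { }   // no I/O, volatile access, synchronization, or atomic operation:
                       // the implementation may assume the loop terminates
}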
Rationale (October, 2015)
SG1 reaffirms the original intent of this specification.
According to 6.3 [basic.def.odr] paragraph 5, it is possible for a static data member of a class template to be defined more than once in a given program provided that each such definition occurs in a different translation unit and the ODR is met.
Now consider the following example:
src1.cpp:
#include <iostream>

int initializer() {
  static int counter;
  return counter++;
}

int g_data1 = initializer();

template<class T> struct exp {
  static int m_data;
};
template<class T> int exp<T>::m_data = initializer();

int g_data2 = initializer();
extern int g_data3;

int main() {
  std::cout << exp<char>::m_data << ", " << g_data1 << ", "
            << g_data2 << ", " << g_data3 << std::endl;
  return 0;
}
src2.cpp:
extern int initializer();
int g_data3 = initializer();

template<class T> struct exp {
  static int m_data;
};
template<class T> int exp<T>::m_data = initializer();

void func() {
  exp<char>::m_data++;
}
The specialization exp<char>::m_data is implicitly instantiated in both translation units, hence (13.9.2 [temp.inst] paragraph 1) its initialization occurs. And for both definitions of exp<T>::m_data the ODR is met. According to 6.9.3.2 [basic.start.static] paragraph 1:
Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit.
But for exp<T>::m_data we have two definitions. Does it mean that both g_data1 and g_data3 are guaranteed to be dynamically initialized before exp<char>::m_data?
Suggested Resolution: Insert the following sentence before the last two sentences of 6.3 [basic.def.odr] paragraph 5:
In the case of D being a static data member of a class template the following shall also hold:
- for a given (not explicit) specialization of D initialized dynamically (6.9.3.2 [basic.start.static]), the accumulated set of objects initialized dynamically in namespace scope before the specialization of D shall be the same in every translation unit that contains the definition for this specialization.
Notes from 10/01 meeting:
It was decided that this issue is not linked to issue 270 and that there is no problem, because there is only one instantiation (see 5.2 [lex.phases] paragraph 8).
The subject line pretty much says it all. It's a possibility that hadn't ever occurred to me. I don't see any prohibition in the standard, and I also don't think the possibility introduces any logical inconsistencies. The proper behavior, presumably, would be to go through the list of already-constructed objects (not including the current one, since its constructor wouldn't have finished executing) and destroy them in reverse order. Not fundamentally hard, and I'm sure lots of existing implementations already do that.
I'm just not sure whether the standard was intended to support this, or whether it's just that nobody else thought of it either. If the former, then a non-normative note somewhere in 6.9.3.2 [basic.start.static] might be nice.
Rationale (October 2004):
There is nothing in the Standard to indicate that this usage is prohibited, so it must be presumed to be permitted.
According to 6.9.3.2 [basic.start.static] paragraph 2,
A constant initializer for an object o is an expression that is a constant expression, except that it may also invoke constexpr constructors for o and its subobjects even if those objects are of non-literal class types [Note: such a class may have a non-trivial destructor —end note].
This would be clearer if worded as something like,
A constant initializer for an object o is an expression that would be a constant expression if every constexpr constructor invoked for o and its subobjects were a constructor for a literal class type.
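For illustration, the kind of initialization both wordings intend to permit (a minimal sketch):

struct S {
  constexpr S() {}
  ~S();   // non-trivial destructor, so S is not a literal class type
};

S s;      // s has a constant initializer: the constexpr constructor may be invoked
          // even though S is not a literal type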
Rationale (February, 2014):
CWG felt that the existing wording is clear enough.
An operator expression can, according to Clause 7 [expr] paragraph 2, require transformation into function call syntax. The reference in that paragraph is to 12.4 [over.oper] , but it should be to 12.2.2.3 [over.match.oper] .
Rationale (04/99): The subsections 12.4.2 [over.unary] , 12.4.3 [over.binary] , etc. of the referenced section are in fact relevant.
The C++ standard says in 7.2.1 [basic.lval], in paragraph 15:
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
Note that it is a literal copy from the C standard, but this is of course not the problem.
In C, union is not defined as an aggregate type. Therefore it is appropriate to say “aggregate or union.” But things changed in C++: aggregate type includes union type now (though not all unions are aggregates), and it becomes clear that the “union” in “aggregate or union” is redundant and should be deleted.
The above cited paragraph could be changed to:
an aggregate type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate)
Rationale (October, 2006):
As noted in the issue, not all unions are aggregates, but those that are not aggregates still allow aliasing. That part of the specification would be lost with the suggested change.
Historically, based on C's treatment, cv-qualification of non-class rvalues has been ignored in C++. With the advent of rvalue references, it's not quite as clear that this is desirable. For example, some implementations are reported to print const rvalue for the following program:
const int bar() { return 5; }

void pass_int(int&& i) { printf("rvalue\n"); }
void pass_int(const int&& i) { printf("const rvalue\n"); }

int main() {
  pass_int(bar());
}
Rationale (August, 2010):
The current specification is as intended.
According to 7.2.1 [basic.lval] paragraph 1,
An xvalue is the result of certain kinds of expressions involving rvalue references (9.3.4.3 [dcl.ref]).
However, there are now expressions not involving rvalue references whose results are xvalues, e.g., a member access expression in which the left operand is a prvalue.
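For example (a minimal sketch):

struct A { int m; };

int&& r = A{}.m;   // A{} is a prvalue, yet A{}.m is an xvalue, with no rvalue
                   // reference involved anywhere in the expression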
Rationale (November, 2014):
The cited wording does not preclude other kinds of expressions that result in xvalues. This wording could be expanded editorially if a more extensive coverage is desired.
According to 7.3.2 [conv.lval] paragraph 1, applying the lvalue-to-rvalue conversion to any uninitialized object results in undefined behavior. However, character types are intended to allow any data, including uninitialized objects and padding, to be copied (hence the statements in 6.8.2 [basic.fundamental] paragraph 1 that “For character types, all bits of the object representation participate in the value representation” and in 7.2.1 [basic.lval] paragraph 15 that char and unsigned char types can alias any object). The lvalue-to-rvalue conversion should be permitted on uninitialized objects of character type without evoking undefined behavior.
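For illustration (a minimal sketch):

void f() {
  unsigned char buf[8];        // uninitialized
  unsigned char c = buf[0];    // lvalue-to-rvalue conversion applied to an uninitialized
                               // object of narrow character type
  (void)c;
}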
Rationale (February, 2021):
The Standard now clearly specifies the handling of indeterminate values for unsigned char and std::byte types; see 6.7.4 [basic.indet].
Paragraph 3 of section 7.3.7 [conv.prom] contains a statement saying that if a bit-field is larger than int or unsigned int, no integral promotions apply to it. This phrase needs further clarification, as it is hardly possible to figure out what it means. See below.
Assume a machine whose general-purpose registers are 32 bits (with 8-bit bytes) and a C++ implementation where an int is 32 bits and a long is 64 bits, and consider the following snippet of code:
struct ExternalInterface {
  long field1:36, field2:28;
};

int main() {
  ExternalInterface myinstance = { 0x100000001L, 0x12, };
  if (myinstance.field1 < 0x100000002L) {
    // do something
  }
}
Does the standard prohibit the implementation from promoting field1's value into two general-purpose registers? Does it impose a burden of using shift machine instructions to work with the field's value? What else could that phrase mean?
Either alternative is implementation specific, so I don't understand why the phrase "If the bit-field is larger yet, no integral promotions apply to it" made it to the standard.
Notes from 10/01 meeting:
The standard of course does not dictate what an implementation might do with regard to use of registers or shift instructions in the generated code. The phrase cited means only that a larger bit-field does not undergo integral promotions, and therefore it retains the type with which it was declared (long in the above example). The Core Working Group judged that this was sufficiently clear in the standard.
Note that 11.4.10 [class.bit] paragraph 1 indicates that any bits in excess of the size of the underlying type are padding bits and do not participate in the value representation. Therefore the field1 bit field in the above example is not capable of holding the indicated values, which require more than 32 bits.
Section 7.3.11 [conv.fpint] paragraph 1 states:
An rvalue of a floating point type can be converted to an rvalue of an integer type. The conversion truncates; that is, the fractional part is discarded.
Here, the concepts of “truncation” and “fractional part” seem to be used without precise definitions. When -3.14 is converted into an integer, is the truncation toward zero or away from zero? Is the fractional part -0.14 or 0.86? The standard seems to give no clear answer to these questions.
Suggested resolution:
Replace “truncates” with “truncates toward zero.”
Replace “the fractional part” with “the fractional part (where that of x is defined as x-floor(x) for nonnegative x and x-ceiling(x) for negative x);” there should be a better wording for this, or the entire statement “that is, the fractional part is discarded” can be removed, once the meaning of “truncation” becomes unambiguous as above.
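For example, under the “truncation toward zero” reading:

int i = static_cast<int>(-3.14);   // i == -3
int j = static_cast<int>( 3.14);   // j == 3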
Rationale (October, 2006):
The specification is clear enough: “fractional part” refers to the digits following the decimal point, so that -3.14 converted to int becomes -3.
There is no normative requirement regarding the ability of floating-point values to represent integer values exactly; however, 7.3.11 [conv.fpint] paragraph 2 appears to implicitly rely on their ability to represent the values 0 and 1:
If the source type is bool, the value false is converted to zero and the value true is converted to one.
Rationale (October, 2015):
CWG felt that the cited passage should be read as indicating that converting true and false should have the same result as converting 1 and 0 and thus do not imply a requirement that those values be represented exactly.
In the following code, I expect both "null" and "FALSE" to be null pointer constants -- and that the code should compile and output the string "int*" twice to cout:
#include <iostream>
using namespace std;

void foo(int* p) {
  cout << "int*" << endl;
}

int main(void) {
  const int null = 0;
  foo(null);

  const bool FALSE = false;
  foo(FALSE);
}
ISO/IEC 14882-1998 7.3.12 [conv.ptr] states:
An integral constant expression rvalue of integer type that evaluates to zero (called a /null pointer constant/) can be converted to a pointer type.
Stroustrup appears to agree with me -- he states (3rd edition page 88):
In C, it has been popular to define a macro NULL to represent the zero pointer. Because of C++'s tighter type checking, the use of plain 0, rather than any suggested NULL macro, leads to fewer problems. If you feel you must define NULL, use:

const int NULL = 0;
However gcc 3.3.1 rejects this code with the errors:
bug.cc:17: error: invalid conversion from `int' to `int*'
bug.cc:19: error: cannot convert `const bool' to `int*' for argument `1' to `void foo(int*)'
I have reported this as a bug (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=13867), but the gcc team states that 4.10 requires that a null pointer constant must be an rvalue -- and no implicit conversion from an lvalue to an rvalue is required (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=396):
a null pointer constant is an integral constant expression rvalue that evaluates to zero [4.10/1] in this case `null' is an lvalue. The standard does not specify that lvalue->rvalue decay happens here, so `null' is not a null pointer constant.
I disagree with the gcc team's interpretation -- I don't see why 7.2.1 [basic.lval] doesn't apply:
Whenever an lvalue appears in a context where an rvalue is expected, the lvalue is converted to an rvalue;
The insertion of the word rvalue appears to have occurred during standardization -- it is not present in either Stroustrup 2nd edition or the 3rd edition. Does the committee deliberately intend to exclude an lvalue as a null pointer constant by adding the word rvalue? If so, it leads to the rather bizarre fact that "null" is not a null pointer constant, but "null + 0" is!
Notes from the March 2004 meeting:
We think this is just a bug in gcc. The const variable does get converted to an rvalue in this context. This case is not really any different than cases like
const int null = 0;
int i = null;

or

const int i = 1;
int a[i];

(which are accepted by gcc). No one would argue that the second lines of those examples are invalid because the variables are lvalues, and yet the conversions to rvalue happen implicitly for the same reason cited above -- the contexts require an rvalue.
Currently both implicit (7.3.13 [conv.mem]) and explicit (7.6.1.9 [expr.static.cast]) conversions of pointers to members permit only cases in which the type of the member is the same except for cv-qualification. It would seem reasonable to allow conversions in which one member type is a base class of the other. For example:
struct B { };
struct D: B { };
struct X { D d; };
struct Y: X { };

B Y::* pm = &X::d;   // Currently ill-formed: type of d is D, not B
(See also issue 170.)
EWG 2022-11-11
The change is plausible, but needs a paper to EWG.
7.5.5 [expr.prim.lambda] paragraph 2 says,
A closure object behaves as a function object (22.10 [function.objects])...
This linkage to <functional> increases the dependency of the language upon the library and is inconsistent with the definition of “freestanding” in 16.4.2.5 [compliance].
Rationale (July, 2009):
The reference to 22.10 [function.objects] appears in a note, not in normative text, and is intended only to clarify the meaning of the term “function object.” The CWG does not believe that this reference creates any dependency on any library facility.
The following case is ill-formed:
int f (int&);
void* f (const int&);

int main() {
  int i;
  [=] ()-> decltype(f(i)) { return f(i); };
}
The decltype(f(i)) is not of the form decltype((x)), and also not within the body of the lambda, so the special rewriting rule doesn't apply. So, the call in the decltype selects the first overload, and the call in the body selects the second overload, and there's no conversion from void* to int, so the return-statement is ill-formed.
This pattern is likely to arise frequently because of the restrictions on deducing the return type from the body of the lambda.
Daveed Vandevoorde: The pattern may be common, but it probably doesn't matter much in practice. It's most likely that overloaded functions that differ only in the cv-qualification of their parameters will have related return types.
Rationale (October, 2009):
The consensus of the CWG was that this is not a sufficiently important problem to warrant changing the existing specification.
According to 7.5.5 [expr.prim.lambda] paragraph 21,
When the lambda-expression is evaluated, the entities that are captured by copy are used to direct-initialize each corresponding non-static data member of the resulting closure object.
This apparently means that if the capture-default is to copy, entities captured by default, implicitly, are copied even in cases where the copy constructors of such entities are explicit. It should be required that such entities be captured explicitly instead.
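A minimal sketch of the behavior described, assuming an illustrative class E with an explicit copy constructor:

struct E {
  E() = default;
  explicit E(const E&) = default;   // explicit copy constructor
};

void g() {
  E e;
  auto h = [=] { (void)&e; };   // e is odr-used and therefore captured by copy;
                                // the closure member is direct-initialized, so the
                                // explicit copy constructor is acceptable
  h();
}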
See also issue 1020.
Rationale (August, 2010):
The behavior is according to the original design and is similar to what would happen if the constructor of the closure object initialized the members for the captured entities using mem-initializers. CWG did not see sufficient motivation to change the design.
The conditions under which a closure class has a conversion function to a pointer-to-function type are given in 7.5.5 [expr.prim.lambda] paragraph 6:
The closure type for a non-generic lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function...
Does this apply to a lambda whose lambda-capture is empty by virtue of being an empty pack expansion? For example, is the following well-formed?
#include <cstdlib>

template <typename ...Args> void foo(Args ...args) {
  auto xf = [args ...] { };
  std::atexit(xf);
}
This is likely a violation of the rule in 13.8 [temp.res] paragraph 8,
If every valid specialization of a variadic template requires an empty template parameter pack, the template is ill-formed, no diagnostic required.
Does this need to be clarified?
Rationale (September, 2013):
The statement in 7.5.5 [expr.prim.lambda] paragraph 6 is a syntactic constraint, not a semantic one. The example has a lambda-capture, regardless of its expansion in a given instantiation. This is consistent with the intent expressed in 13.7.4 [temp.variadic] paragraph 6:
When N is zero, the instantiation of the expansion produces an empty list. Such an instantiation does not alter the syntactic interpretation of the enclosing construct...
According to 7.6.1.2 [expr.sub] paragraph 11,
No entity is captured by an init-capture.
It should be made clearer that a variable, odr-used by an init-capture in a nested lambda, is still captured by the containing lambda as a result of the init-capture.
Rationale (October, 2015):
Subsequent edits have removed the offending phrase.
Consider the following example:
void f() {
  thread_local int n = 10;
  std::thread([&] { std::cout << n << std::endl; }).join();
}
This function prints 0, because:
The lambda does not capture n
n is not initialized on the spawned thread prior to the invocation of the lambda.
Additional note, March, 2016:
SG1 discussed this issue and concluded that it should be resolved as follows:
1. If the program would result in a capture by reference of a local thread-local variable, then it is ill-formed.
2. If the program has a capture by value of a local thread-local variable, then a copy of the value from the calling thread is captured (and initialized in the calling thread, if necessary).
The rationale for #1 is that, if we allowed capture of local thread-locals, some programmers will have one intuition of what to expect and other programmers will have the opposite intuition. It's better to forbid both interpretations. We don't want to say simply that there is no capture by reference of thread-locals, because simply ignoring the local thread-local might result in name-lookup finding a global variable by the same name, which would be very confusing.
Rationale (March, 2017):
Only automatic variables are captured. A lambda accessing a thread-local variable would be ill-formed.
Currently function types with different language linkage are not compatible, and 7.6.1.3 [expr.call] paragraph 1 makes it undefined behavior to call a function via a type with a different language linkage. These features are generally not enforced by most current implementations (although some do) between functions with C and C++ language linkage. Should these restrictions be relaxed, perhaps as conditionally-supported behavior?
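A hedged sketch of the kind of code affected (illustrative names; most implementations accept it even though the types differ in language linkage):

extern "C" void c_func();

void (*pf)() = c_func;   // pf's type has C++ language linkage, c_func has C linkage;
                         // strictly these are distinct types, but the mismatch is
                         // generally not enforced
void call() { pf(); }    // a call through a type with different language linkage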
Rationale (October, 2012):
CWG felt that this language design question would be better considered by EWG.
EWG 2022-11-11
Any changes in this area should be pursued via a paper to EWG.
Should the determination of array bounds from an initializer, described in 9.4.2 [dcl.init.aggr] paragraph 4, apply to creation of a temporary array using the T{expr} syntax? E.g., is the following example well-formed?
typedef int ARR[];
int* p = ARR{1,2,3};
(See also issues 1300, 1307, and 1326.)
Rationale (October, 2012):
The example is valid, according to the new wording of 9.4.2 [dcl.init.aggr] paragraph 4.
According to 7.6.1.5 [expr.ref] paragraph 4,
If E2 is declared to have type “reference to T,” then E1.E2 is an lvalue...
This applies to rvalue reference types as well as to lvalue reference types, based on the rationale from Clause 7 [expr] paragraph 7 that
In general... named rvalue references are treated as lvalues and unnamed rvalue references to objects are treated as xvalues...
Since a non-static data member has a name, it would appear most naturally to fall into the lvalue category. This makes sense as well from the perspective that the target of such a reference does not bear any necessary correlation with the value category of the object expression; in particular, an xvalue object might have an rvalue reference member referring to a different object from which it would be an error to move.
On the other hand, rvalue reference members have limited utility and are likely only to occur as the result of template argument deduction in the context of perfect forwarding, such as using a std::pair to forward values. In such cases, a first or second member of rvalue reference type would be most naturally treated as having the same value category as that of the object expression. The utility of this usage may outweigh the safety considerations that shaped the current policy.
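For illustration (a minimal sketch; the static_assert reflects the first reading described above):

#include <type_traits>

struct A { int&& r; };
A&& f();

static_assert(std::is_lvalue_reference<decltype((f().r))>::value,
              "f().r is an lvalue even though f() is an xvalue");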
Rationale (April, 2013):
The design of rvalue references in the language is complex, and CWG felt that an attempt to change the existing rules to accommodate this case ran the risk of breaking other cases. Treating named rvalue reference members as lvalues, consistently with other named rvalue references, is also safer in that it prevents the inadvertent theft of resources from an object to which such a member refers.
Consider:
struct A {
  template<class T> static int X;
};
template<class T> int A::X = T{};

A{}.X<int>;   // error
A::X<int>;    // OK
Implementations seem to reject the class member access, despite 7.6.1.5 [expr.ref] bullet 6.1 stating the contrary.
Rationale (November, 2016):
The specification is as intended.
According to 7.6.1.8 [expr.typeid] paragraphs 2-3,
When typeid is applied to a glvalue whose type is a polymorphic class type (11.7.3 [class.virtual]), the result refers to a std::type_info object representing the type of the most derived object (6.7.2 [intro.object]) (that is, the dynamic type) to which the glvalue refers...
When typeid is applied to an expression other than a glvalue of a polymorphic class type, the result refers to a std::type_info object representing the static type of the expression.
The status of a glvalue of incomplete class type is not clear from this specification. Since it is not known whether an incomplete class type is polymorphic or not, the existing wording could be read either as giving that case undefined behavior or as falling into paragraph 3 and always returning the static type.
The wording for dynamic_cast requires class types to be complete, as does paragraph 4, describing typeid applied to a type-id.
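For illustration, a minimal sketch of the case in question (names are illustrative):

#include <typeinfo>

class Incomplete;   // not defined in this translation unit

const std::type_info& probe(Incomplete& r) {
  return typeid(r);   // paragraph 2 (polymorphic glvalue) or paragraph 3 (static type)?
                      // Whether Incomplete is polymorphic cannot be known here.
}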
Rationale (December, 2021):
The change was already applied via the editorial review process, with approval from CWG at the 2021-08-24 teleconference.
[Picked up by evolution group at October 2002 meeting.]
Is it okay for a static_cast to drop exception specifications?
void f() throw(int);

int main () {
  static_cast<void (*)() throw()>(f);   // Okay?
  void (*p)() throw() = f;              // Error
}
The fact that a static_cast is defined, more or less, as an initialization suggests that a check ought to be made.
One tricky point: this is another case where the general rule that the reverse of an implicit cast is allowed as a static_cast bites you -- the reverse conversion doesn't drop exception specifications, and so is okay. Perhaps this should be treated like casting away constness.
Mike Miller comments : I don't think that case can arise. According to 14.5 [except.spec],
An exception-specification shall appear only on a function declarator in a function, pointer, reference, or pointer to member declaration or definition.
We strengthened that in issue 87 (voted to DR status in Copenhagen) to
An exception-specification shall appear only on a function declarator for a function type, pointer to function type, reference to function type, or pointer to member function type that is the top-level type of a declaration or definition, or on such a type appearing as a parameter or return type in a function declarator.
As I read that, you can't put an exception-specification on the type-id in a static_cast, which means that a static_cast can only weaken, not strengthen, the exception specification.
The core WG discussed this at the 10/01 meeting and agreed.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
It is not specified under what conditions an object pointer created by converting a function pointer, as described in 7.6.1.10 [expr.reinterpret.cast] paragraph 8, will be safely-derived, particularly in light of the conditionally-supported, implementation-defined nature of such conversions.
Notes from the March, 2009 meeting:
If this is to be addressed, the result should not be as suggested, i.e., a requirement for implementation documentation appearing only in a note. At the least, such a requirement must be in normative text.
Rationale (July, 2009):
The definition of “safely-derived pointer” is clearly and exclusively formulated in terms of pointers to objects. So no implementation is required to maintain safe pointer derivation through conversion to and from a function-pointer type.
On the other hand, any garbage-collecting implementation is free to treat function pointers the same as object pointers for purposes of collection. This would provide the effect of safe pointer derivation through function-pointer types. An implementation is even free to document this behavior, if it so chooses.
However, converting a pointer to a dynamically-allocated object into a function pointer would be a very strange and almost always pointless and unsafe thing to do. There is no need for the standard to encourage this sort of behavior, even to the extent of adding a note mentioning the possibility.
During the discussion of issue 799, which specified the result of using reinterpret_cast to convert an operand to its own type, it was observed that it is probably reasonable to allow reinterpret_cast between any two types that have the same size and alignment.
Additional note, April, 2015:
It has been suggested that this question may more properly be the province of EWG, especially in light of discussions during the resolution of issue 330.
Rationale (May, 2015):
CWG agreed that this question should be considered from a language design perspective and is thus being referred to EWG.
Rationale (June, 2021):
EWG resolved to close this issue. The bit_cast function addresses some of the use-cases. Supporting other use-cases would need a paper. See vote.
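As a sketch of the std::bit_cast alternative mentioned above (assuming float and std::uint32_t have the same size on the target):

#include <bit>
#include <cstdint>

static_assert(sizeof(float) == sizeof(std::uint32_t), "assumed for this sketch");

std::uint32_t bits = std::bit_cast<std::uint32_t>(1.0f);   // copies the object representation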
Consider this inconsistency:
void func(long l, float f) {
  (void)reinterpret_cast<long *>(&l);    // ok
  (void)reinterpret_cast<long>(l);       // ok
  (void)reinterpret_cast<float *>(&f);   // ok
  (void)reinterpret_cast<float>(f);      // ill-formed
}
Suggested resolution:
Change 7.6.1.10 [expr.reinterpret.cast] paragraph 2 as follows (replacing "integral" with "arithmetic"):

... An expression of arithmetic, enumeration, pointer, or pointer-to-member type can be explicitly converted to its own type; such a cast yields the value of its operand.
Rationale (November, 2016):
The specification is as intended.
7.6.2.2 [expr.unary.op] paragraph 2 indicates that the type of an address-of-member expression reflects the class in which the member was declared rather than the class identified in the nested-name-specifier of the qualified-id. This treatment is unintuitive and can lead to strange code and unexpected results. For instance, in
struct B { int i; };
struct D1: B { };
struct D2: B { };

int (D1::* pmD1) = &D2::i;   // NOT an error

More seriously, template argument deduction can give surprising results:

struct A { int i; virtual void f() = 0; };
struct B : A { int j; B() : j(5) {} virtual void f(); };
struct C : B { C() { j = 10; } };

template <class T> int DefaultValue( int (T::*m) ) {
  return T().*m;
}

... DefaultValue( &B::i )   // Error: A is abstract
... DefaultValue( &C::j )   // returns 5, not 10.
Suggested resolution: 7.6.2.2 [expr.unary.op] should be changed to read,
If the member is a nonstatic member (perhaps by inheritance) of the class nominated by the nested-name-specifier of the qualified-id having type T, the type of the result is "pointer to member of class nested-name-specifier of type T."

and the comment in the example should be changed to read,
// has type int B::*
Notes from 04/00 meeting:
The rationale for the current treatment is to permit the widest possible use to be made of a given address-of-member expression. Since a pointer-to-base-member can be implicitly converted to a pointer-to-derived-member, making the type of the expression a pointer-to-base-member allows the result to initialize or be assigned to either a pointer-to-base-member or a pointer-to-derived-member. Accepting this proposal would allow only the latter use.
Additional notes:
Another problematic example has been mentioned:
class Base {
public:
  int func() const;
};

class Derived : public Base { };

template<class T> class Templ {
public:
  template<class S> Templ(S (T::*ptmf)() const);
};

void foo() {
  Templ<Derived> x(&Derived::func);   // ill-formed
}
In this example, even though the conversion of &Derived::func to int (Derived::*)() const is permitted, the initialization of x cannot be done because template argument deduction for the constructor fails.
If the suggested resolution were adopted, the amount of code broken by the change might be reduced by adding an implicit conversion from pointer-to-derived-member to pointer-to-base-member for appropriate address-of-member expressions (not for arbitrary pointers to members, of course).
(See also issues 247 and 1121.)
Additional notes (September, 2012):
Tomasz Kamiński pointed out three additional motivating examples:
struct Very_base { int a; };
struct Base1 : Very_base {};
struct Base2 : Very_base {};
struct Derived : Base1, Base2 {};

int main() {
  Derived d;
  int Derived::* a_ptr = &Derived::Base1::a;   // error: Very_base ambiguous despite qualification
}
Also:
struct Base { int a; };
struct Derived : Base { int b; };

template<typename Class, typename Member_type, Member_type Class::* ptr>
Member_type get(Class &c) {
  return c.*ptr;
}

void call(int (*f)(Derived &));

int main() {
  call(&get<Derived, int, &Derived::b>);   // Works correctly
  call(&get<Derived, int, &Derived::a>);   // Fails because &Derived::a returns an int Base::*
                                           // and no conversions are applied to pointers to members
                                           // (as specified in 13.4.3 [temp.arg.nontype] paragraph 5)
  call(&get<Base, int, &Derived::a>);      // Template function is instantiated properly but has invalid type
}
Finally:
struct Base { int a; };

struct Derived : private Base {
public:
  using Base::a;   // make a accessible
};

int main() {
  Derived d;
  d.a;                                // valid
  int Derived::* ptr = &Derived::a;   // Conversion from int Base::* to int Derived::*
                                      // is ill-formed because the base class is inaccessible
}
Rationale (October, 2012):
CWG felt that such a change to the existing semantics would be better considered by EWG rather than as a defect.
Additional note, April, 2015:
EWG has determined that the utility of such a change is outweighed by the fact that it would break code. See EWG issue 89.
In 7.6.2.2 [expr.unary.op], part of paragraph 7 describes how to compute the negative of an unsigned quantity:
The negative of an unsigned quantity is computed by subtracting its value from 2^n, where n is the number of bits in the promoted operand. The type of the result is the type of the promoted operand.
According to this method, -0U will get the value 2^n - 0 = 2^n, where n is the number of bits in an unsigned int. However, 2^n is obviously out of the range of values representable by an unsigned int and thus not the actual value of -0U. To get the result, a truncating conversion must be applied.
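A worked illustration, assuming a 32-bit unsigned int:

unsigned int a = -0u;   // 2^32 - 0 == 2^32, reduced modulo 2^32, yields 0
unsigned int b = -1u;   // 2^32 - 1 == 4294967295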
Rationale (April, 2007):
As noted in the issue description, a “truncating conversion” is needed. This conversion is supplied without need of an explicit mention, however, by the nature of unsigned arithmetic given in 6.8.2 [basic.fundamental] paragraph 4:
Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
There does not seem to be any significant technical obstacle to allowing a void* pointer to be dereferenced, and that would avoid having to use weighty circumlocutions when casting to a reference to an object designated by such a pointer.
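For illustration, the circumlocution in question and what the suggested relaxation would allow (a minimal sketch):

void g(void* p) {
  int& r1 = *static_cast<int*>(p);      // the circumlocution required today
  // int& r2 = static_cast<int&>(*p);   // what dereferencing a void* would permit (not valid C++)
  (void)r1;
}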
Rationale (June, 2014):
This request for a language extension should be evaluated by EWG before any action is taken.
EWG 2022-11-11
This is a request for a new feature, which should be proposed in a paper to EWG.
Paper P0913R0 proposed adding support for a symmetric-transfer capability intended to allow coroutines to be recursively resumed without consuming an unbounded amount of stack space. However, the current wording does not require this, only suggesting it in a note in bullet 5.1.1 of 7.6.2.4 [expr.await]. This should be a normative requirement.
Rationale (July, 2020):
This issue is essentially about implementation limits, which are impossible to specify normatively, and it is inappropriate to specify the desired and forbidden implementation techniques in the Standard.
According to 7.6.2.4 [expr.await] bullets 3.3 and 3.4,
Evaluation of an await-expression involves the following auxiliary types, expressions, and objects:
...
o is determined by enumerating the applicable operator co_await functions for an argument a (12.2.2.3 [over.match.oper]), and choosing the best one through overload resolution (12.2 [over.match]). If overload resolution is ambiguous, the program is ill-formed. If no viable functions are found, o is a. Otherwise, o is a call to the selected function with the argument a. If o would be a prvalue, the temporary materialization conversion (7.3.5 [conv.rval]) is applied.
e is an lvalue referring to the result of evaluating the (possibly-converted) o.
...
However, the temporary materialization conversion produces an xvalue, not an lvalue. Should e be a glvalue instead of an lvalue?
Rationale (February, 2021):
The specification is as intended; o is converted to an lvalue if it is an xvalue result of the temporary materialization conversion. e is used in both bullets 3.7 and 3.8; if it were an xvalue instead of an lvalue, the call to await_suspend could steal e's resources and leave the call to await_resume with a defunct object, which would be undesirable.
The standard forbids a lambda from appearing in a sizeof operand:
A lambda-expression shall not appear in an unevaluated operand (Clause 7 [expr]).
(7.5.5 [expr.prim.lambda] paragraph 2). However, there appears to be no prohibition of the equivalent usage when a variable or data member has a closure class as its type:
int main() {
  int i = 1;
  int j = 1;
  auto f = [=]{ return i + j; };
  return sizeof(f);
}
According to 7.5.5 [expr.prim.lambda] paragraph 3, the size of a closure class is not specified, so that it could vary between translation units or between otherwise link-compatible implementations, which could result in ODR violations if the size is used as a template non-type argument, for example. Should the Standard forbid taking the size of a closure class? Or should this simply be left as an ABI issue, as is done with other size and alignment questions?
Additional note, April, 2013:
It was observed that generic function wrappers like std::function rely on the ability to make compile-time decisions based on the size of the function object, and forbidding the application of sizeof to closure classes would make that unnecessarily difficult.
Rationale (April, 2013):
CWG agreed that the ODR and portability considerations were not sufficient to outweigh the utility of applying sizeof to closure classes as mentioned in the April, 2013 note and that the issues are more appropriately dealt with in an ABI specification.
According to 7.6.2.5 [expr.sizeof] paragraph 1,
The sizeof operator shall not be applied to an expression that has function or incomplete type, to an enumeration type whose underlying type is not fixed before all its enumerators have been declared, to an array of runtime bound, to the parenthesized name of such types, or to a glvalue that designates a bit-field.
However, it is not possible to name the type of an array of runtime bound, either by typedef or by decltype, so the reference to “the parenthesized name of such types” should precede rather than follow “to an array of runtime bound.”
Rationale (September, 2013):
Arrays of runtime bound were moved from the normative specification to a proposed Technical Specification.
The current specification of the alignof operator (7.6.2.6 [expr.alignof]) allows it to be applied only to types, not to objects. Since the align attribute may be applied to objects, and since existing practice permits querying the alignment of objects, it should be considered whether to allow this in Standard C++ as well.
Additional note, April, 2020:
A survey of current implementations shows that most have already implemented the extension; the example there illustrates one motivation for its use. The principle of least astonishment would suggest that it is surprising for sizeof and alignof to behave differently in this regard.
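For illustration, a minimal sketch of the distinction (illustrative names):

alignas(64) int x;

static_assert(alignof(decltype(x)) == alignof(int), "alignas does not change the type");
// alignof(x), querying the 64-byte-aligned object itself, is the extension under discussion.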
Additional note (April, 2022)
This is a request for an extension, which is pursued by paper P2152 (Querying the alignment of an object).
According to 7.6.2.7 [expr.unary.noexcept] paragraph 2,
The result of the noexcept operator is a constant of type bool and is an rvalue.
Obviously, the result should be a prvalue.
(See also issue 1642, which deals with missing specifications of value categories.)
Notes from the September, 2013 meeting:
This issue is being handled editorially and is being placed in "review" status to ensure that the change has been made.
Rationale (February, 2014):
The change has been made editorially.
Section 11.4.11 [class.free] paragraph 4 says:
If a delete-expression begins with a unary :: operator, the deallocation function's name is looked up in global scope. Otherwise, if the delete-expression is used to deallocate a class object whose static type has a virtual destructor, the deallocation function is the one found by the lookup in the definition of the dynamic type's virtual destructor (11.4.7 [class.dtor]). Otherwise, if the delete-expression is used to deallocate an object of class T or array thereof, the static and dynamic types of the object shall be identical and the deallocation function's name is looked up in the scope of T. If this lookup fails to find the name, the name is looked up in the global scope. If the result of the lookup is ambiguous or inaccessible, or if the lookup selects a placement deallocation function, the program is ill-formed.

I contrast that with 7.6.2.8 [expr.new] paragraphs 16 and 17:

If the new-expression creates an object or an array of objects of class type, access and ambiguity control are done for the allocation function, the deallocation function (11.4.11 [class.free]), and the constructor (11.4.5 [class.ctor]). If the new-expression creates an array of objects of class type, access and ambiguity control are done for the destructor (11.4.7 [class.dtor]).

If any part of the object initialization described above terminates by throwing an exception and a suitable deallocation function can be found, the deallocation function is called to free the memory in which the object was being constructed, after which the exception continues to propagate in the context of the new-expression. If no unambiguous matching deallocation function can be found, propagating the exception does not cause the object's memory to be freed. [Note: This is appropriate when the called allocation function does not allocate memory; otherwise, it is likely to result in a memory leak.]

I think nothing in the latter paragraphs implies that the deallocation function found is the same as that for a corresponding delete-expression. I suspect that may not have been intended and that the lookup should occur "as if for a delete-expression".
Rationale:
Paragraphs 16 through 18 are sufficiently correct and unambiguous as written.
Clause 7 [expr] paragraph 4 appears to grant an implementation the right to generate code for a function call like
f(new T1, new T2)

in the order: allocate memory for T1, allocate memory for T2, construct T1, construct T2, so that an exception thrown by one of the constructors leaks the memory already allocated for the other, not-yet-constructed object.
Suggested resolution: either forbid the ordering above or expand the requirement for reclaiming storage to include exceptions thrown in all operations between the allocation and the completion of the constructor.
Rationale (10/99): Even in the "traditional" ordering of the calls to allocation functions and constructors, memory can still leak. For instance, if T1 were successfully constructed and then the construction of T2 were terminated by an exception, the memory for T1 would be lost. Programmers concerned about memory leaks will avoid this kind of construct, so it seems unnecessary to provide special treatment for it to avoid the memory leaks associated with one particular implementation strategy.
A new-expression that creates an object whose type is a specialization of std::initializer_list initialized from an initializer list results in undefined behavior if the object survives past the end of the full-expression, when the lifetime of the underlying array object ends. Since such a new-expression is effectively useless, should it be made ill-formed?
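For illustration, the kind of new-expression in question (a minimal sketch):

#include <initializer_list>

auto* p = new std::initializer_list<int>{1, 2, 3};
// The backing array's lifetime ends with the full-expression,
// so the elements of *p are already dangling here.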
Notes from the October, 2012 meeting:
The consensus of CWG was that this usage should be ill-formed.
Rationale (February, 2013):
Because the library emulation of std::is_constructible uses unevaluated new-expressions in the implementation, making a new of std::initializer_list ill-formed would give the wrong results for its constructibility. CWG determined that it would be acceptable to leave diagnosing of actual undefined behavior resulting from such constructs to the discretion of the implementation.
It is currently undefined behavior to delete a derived-class object via a pointer to a base class unless the base class has a virtual destructor. It has been suggested that this could be allowed for a standard-layout class. If so, presumably the caveats about a deallocation function or non-trivial destructor found in 7.6.2.9 [expr.delete] paragraph 5 that currently apply to incomplete types would need to be extended to apply to the derived class in such cases.
Another objection that was raised is that such a change would make it more difficult to extend C++ in the future to have global deallocation functions that can take the size of the object being deleted as an argument, as is currently possible for member deallocation functions.
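For illustration, the kind of code the suggestion would allow (illustrative names):

struct B { int i; };   // standard-layout, no virtual destructor
struct D : B { };

void f() {
  B* p = new D;
  delete p;            // currently undefined behavior because B's destructor is not
                       // virtual; the suggestion was to permit cases like this for
                       // standard-layout classes
}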
Rationale (August, 2011):
The specification is as intended; changes to the restriction would need to be considered in a larger context by EWG.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 99.
According to 7.6.2.9 [expr.delete] paragraph 10, deletion of an array of a class with both sized and non-sized deallocation functions is not required to call the sized version if the destructor is trivial:
If deallocation function lookup finds both a usual deallocation function with only a pointer parameter and a usual deallocation function with both a pointer parameter and a size parameter, the function to be called is selected as follows:
If the type is complete and if, for the second alternative (delete array) only, the operand is a pointer to a class type with a non-trivial destructor or a (possibly multi-dimensional) array thereof, the function with two parameters is selected.
Otherwise, it is unspecified which of the two deallocation functions is selected.
However, if only a sized deallocation function is specified as a class-specific deallocation function, it is not clear how the size argument is to be determined if the class has a trivial destructor.
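A minimal sketch of the situation described (illustrative names):

#include <cstddef>

struct S {
  // only a sized usual deallocation function is provided
  void operator delete[](void* p, std::size_t n);
};

void f(S* p) {
  delete[] p;   // ~S() is trivial, yet the size argument n must somehow be determined
}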
Rationale (November, 2016):
The adoption of paper P0035R4 has rendered this issue moot.
According to 7.6.3 [expr.cast] paragraph 4, one possible interpretation of an old-style cast is as a static_cast followed by a const_cast. One would therefore expect that the expressions marked #1 and #2 in the following example would have the same validity and meaning:
struct S {
  operator const int* ();
};

void f(S& s) {
  const_cast<int*>(static_cast<const int*>(s));   // #1
  (int*) s;                                       // #2
}
However, a number of implementations issue an error on #2.
Is the intent that (T*)x should be interpreted as something like
const_cast<T*>(static_cast<const volatile T*>(x))
Rationale (July, 2009):
According to the straightforward interpretation of the wording, the example should work. This appears to be just a compiler bug.
According to 7.6.4 [expr.mptr.oper] paragraph 6,
The result of a .* expression whose second operand is a pointer to a data member is of the same value category (7.2.1 [basic.lval]) as its first operand.
This is incorrect if the member has a reference type, in which case the result is an lvalue.
Rationale (September, 2010):
There are no pointers to member of reference type; see 9.3.4.4 [dcl.mptr] paragraph 3.
An expression of the form pointer + enum (see paragraph 5) is not given meaning, and ought to be, given that paragraph 2 of this section makes it valid. Presumably, the enum value should be converted to an integral value, and the rest of the processing done on that basis. Perhaps we want to invoke the integral promotions here.
[Should this apply to (pointer - enum) too?]
Rationale (04/99): Paragraph 1 invokes "the usual arithmetic conversions" for operands of enumeration type.
(It was later pointed out that the builtin operator T* operator+(T*, ptrdiff_t) (12.5 [over.built] paragraph 13) is selected by overload resolution. Consequently, according to 12.2.2.3 [over.match.oper] paragraph 7, the operand of enumeration type is converted to ptrdiff_t before being interpreted according to the rules in 7.6.6 [expr.add] .)
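As the preceding note indicates, a small illustration of the expression form in question (illustrative names):

enum E { two = 2 };
int a[4];
int* p = a + two;   // the operand of enumeration type is converted (ultimately to
                    // std::ptrdiff_t via the built-in operator+) before the addition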
Code that was portable in C90 and C++98 is no longer portable with the introduction of data types longer than long; code that could previously cast size_t and ptrdiff_t to long without loss of precision (because long was the largest type) can no longer rely on that idiom.
The CWG discussed this during the Berlin (April, 2006) meeting. The general consensus was that this was unavoidable: there are valid reasons for implementations to keep long at a size less than that required for address arithmetic.
See paper J16/06-0053 = WG21 N1983, which also suggests the possibility of required diagnostics for problematic cases as an alternative to restricting the size of size_t and ptrdiff_t.
Rationale (October, 2006):
This is not an area in which the Standard should override the decisions of implementors who wish to maintain the size of long for backward compatibility but need a larger size_t to deal with expanded address spaces. Also, diagnostics of the sort described are better treated as quality of implementation issues rather than topics for standardization.
According to 6.8 [basic.types] paragraph 4,
The object representation of an object of type T is the sequence of N unsigned char objects taken up by the object of type T, where N equals sizeof(T).
and 6.7.2 [intro.object] paragraph 5,
An object of trivially copyable or standard-layout type (6.8 [basic.types]) shall occupy contiguous bytes of storage.
Do these passages make pointer arithmetic (7.6.6 [expr.add] paragraph 5) within a standard-layout object well-defined (e.g., for writing one's own version of memcpy)?
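For illustration, a hand-written byte copy of the kind the question envisions (a minimal sketch assuming T is trivially copyable):

#include <cstddef>

template<class T>
void copy_bytes(T& dst, const T& src) {
  unsigned char* d = reinterpret_cast<unsigned char*>(&dst);
  const unsigned char* s = reinterpret_cast<const unsigned char*>(&src);
  for (std::size_t i = 0; i != sizeof(T); ++i)
    d[i] = s[i];   // pointer arithmetic over the objects' storage
}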
Rationale (August, 2011):
The current wording is sufficiently clear that this usage is permitted.
A shift of zero bits should result in the left operand regardless of its sign. However, the current wording of 7.6.7 [expr.shift] paragraph 2 makes it undefined behavior.
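For illustration (a minimal sketch):

void f() {
  int x = -1;
  int y = x << 0;   // under the wording in question, shifting a negative left operand
                    // was undefined behavior even for a shift of zero bits
  (void)y;
}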
Notes from the February, 2016 meeting:
CWG felt that a reasonable approach might be to define <<N as equivalent to multiplying by 2^N in all cases; see also the resolution of issue 1457. The resolution of this question should also be coordinated with SG6 and SG12, as well as with WG14.
Rationale (February, 2019):
This issue is resolved by the adoption of paper P1236.
The relational operators have unspecified results when comparing pointers that refer to objects that are not members of the same object or elements of the same array (7.6.9 [expr.rel] paragraph 2, second bullet). This restriction (which dates from C89) stems from the desire not to penalize implementations on architectures with segmented memory by forcing them essentially to simulate a flat address space for the purpose of these comparisons. If such an implementation requires that objects and arrays to fit within a single segment, this restriction enables pointer comparison to be done simply by comparing the offset portion of the pointers, which could be much faster than comparing the full pointer values.
The problem with this restriction in C++ is that it forces users of the Standard Library containers to use less<T*> instead of the built-in < operator to provide a total ordering on pointers, a usage that is inconvenient and error-prone. Can the existing restriction be relaxed in some way to allow the built-in operator to provide a total ordering? (John Spicer pointed out that the actual comparison for a segmented architecture need only supply a total ordering of pointer values, not necessarily the complete linearization of the address space.)
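To illustrate the inconvenience (a minimal sketch; the function names are invented):

#include <functional>

bool before_builtin(int* p, int* q) { return p < q; }                   // unspecified for pointers into different objects
bool before_total(int* p, int* q)   { return std::less<int*>()(p, q); } // guaranteed to be a total order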
Rationale (April, 2007):
The current specification is clear and was well-motivated. Analysis of whether this restriction is still needed should be done via a paper and discussed in the Evolution Working Group rather than being handled by CWG as an issue/defect.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 91.
According to 7.6.10 [expr.eq] paragraph 2, two function pointers only compare equal if they point to the same function. However, as an optimization, implementations are currently aliasing functions that have identical definitions. It is not clear whether the Standard needs to deal explicitly with this optimization or not.
Rationale (February, 2012):
The Standard is clear on the requirements, and implementations are free to optimize within the constraints of the “as-if” rule.
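To illustrate the optimization in question (the names are invented for exposition):

bool positive1(int x) { return x > 0; }
bool positive2(int x) { return x > 0; }   // identical definition
// &positive1 == &positive2 must yield false (7.6.10 [expr.eq]); an implementation may share the
// generated code for the two bodies only if that distinct-address behavior is still preserved.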
According to 7.6.10 [expr.eq] bullet 2.1,
Comparing pointers is defined as follows:
If one pointer represents the address of a complete object, and another pointer represents the address one past the last element of a different complete object, the result of the comparison is unspecified.
The use of the term “complete object” is confusing. A complete object is one that is not a subobject of any other object (6.7.2 [intro.object] paragraph 2), so this restriction apparently does not apply to non-static data members. Is the following guaranteed to work?
struct S { int i[2]; int j[2]; }; constexpr bool check1() { S s = { { 1, 2 }, { 3, 4 } }; return &s.i[2] == &s.j[0]; } static_assert(check1(), "Guaranteed?");
In particular, is there a guarantee that there is no padding between nonstatic data members of the same type?
Rationale (July, 2017):
CWG determined that the existing wording is correct: the result of the comparison is implementation-defined, but not unspecified, so the program is well-formed but the assertion is not guaranteed to pass.
P0145 caused this situation:
extern "C" void abort(); struct A { int i; int data[10000]; } a; A& aref() { a.i++; return a; } int main() { aref() = a; if (a.i != 0) abort(); }
Is a.i now required to be 0?
A related example is this:
int b; int& bref() { ++b; return b; } int main() { bref() = b; if (b != 0) abort(); }
Here, b is required to be 0 after the assignment, because the value computation of the RHS of the assignment is sequenced before any side-effects on the LHS. The difference in guaranteed behavior between class and non-class types is disturbing.
Rationale (April, 2017):
Class copy assignment binds a const T&, so the A example actually yields a.i == 1 after the assignment.
Consider:
int* p = false; // Well-formed?
int* q = !1;    // What about this?

From 6.8.2 [basic.fundamental] paragraph 6: "As described below, bool values behave as integral types."
From 7.3.12 [conv.ptr] paragraph 1: "A null pointer constant is an integral constant expression rvalue of integer type that evaluates to zero."
From 7.7 [expr.const] paragraph 1: "An integral constant-expression can involve only literals, enumerators, const variables or static members of integral or enumeration types initialized with constant expressions, ..."
In 5.13.2 [lex.icon] : No mention of true or false as an integer literal.
From 5.13.6 [lex.bool] : true and false are Boolean literals.
So the definition of q is certainly valid, but the validity of p depends on how the sentence in 7.7 [expr.const] is parsed. Does it mean
If the latter, then (3.0 < 4.0) is a constant expression, which I don't think we ever wanted. If the former, though, we have the anomalous notion that true and false are not constant expressions.
Now, you may argue that you shouldn't be allowed to convert false to a pointer. But what about this?
static const bool debugging = false;
// ...
int table[debugging? n+1: n];

Whether the definition of table is well-formed hinges on whether false is an integral constant expression.
I think that it should be, and that failure to make it so was just an oversight.
Rationale (04/99): A careful reading of 7.7 [expr.const] indicates that all types of literals can appear in integral constant expressions, but floating-point literals must immediately be cast to an integral type.
Does an explicit temporary of an integral type qualify as an integral constant expression? For instance,
void* p = int(); // well-formed?
It would appear to be, since int() is an explicit type conversion according to 7.6.1.4 [expr.type.conv] (at least, it's described in a section entitled "Explicit type conversion") and type conversions to integral types are permitted in integral constant expressions (7.7 [expr.const]). However, this reasoning is somewhat tenuous, and some at least have argued otherwise.
Note (March, 2008):
This issue should be closed as NAD as a result of the rewrite of 7.7 [expr.const] in conjunction with the constexpr proposal.
Rationale (August, 2011):
As given in the preceding note.
According to 7.7 [expr.const] paragraph 1,
In particular, except in sizeof expressions, functions, class objects, pointers, or references shall not be used, and assignment, increment, decrement, function-call, or comma operators shall not be used.
Given a case like
enum E { e }; int operator+(int, E); int i[4 + e];
does this mean that the overloaded operator+ is not considered (because it can't be called), or is it selected by overload resolution, thus rendering the program ill-formed?
Rationale (April, 2005):
All expressions, including constant expressions, are subject to overload resolution. The example is ill-formed.
typeid expressions can never be constant, whether or not the operand is a polymorphic class type. The result of the expression is a reference, and the std::type_info class that the reference refers to is polymorphic, with a virtual destructor - it can never be a literal type.
Rationale (July, 2009):
The intent of this specification was that the address of such a std::type_info object could be treated as an address constant and thus usable in constant initialization (contrary to the statement in the comment, the result of typeid is an lvalue, not a reference).
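A minimal illustration of the intended use described above (not taken from the original issue):

#include <typeinfo>

const std::type_info* const pti = &typeid(int);   // the address is usable as an address constant
                                                  // in constant initialization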
reinterpret_cast was forbidden in constant expressions to prevent type-punning operations at compile time. This may have been too strict, as there are uses for the operator that do not involve type punning. For example, a portable implementation of the addressof function could be written as a constexpr function if reinterpret_cast could be used in a constant expression.
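A sketch of the kind of addressof implementation the issue has in mind (my_addressof is a hypothetical name; it cannot currently be constexpr because of the reinterpret_casts):

template<class T>
T* my_addressof(T& t) {
  return reinterpret_cast<T*>(
      &const_cast<char&>(reinterpret_cast<const volatile char&>(t)));
}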
Rationale (October, 2012):
Although reinterpret_cast was permitted in address constant expressions in C++03, this restriction has been implemented in some compilers and has not proved to break significant amounts of code. CWG deemed that the complications of dealing with pointers whose types changed (pointer arithmetic and dereference could not be permitted on such pointers) outweighed the possible utility of relaxing the current restriction.
The following example appears to be ill-formed, although current implementations accept it:
template<bool> struct S { }; S<0> s;
The reason this is ill-formed is that the non-type template argument is a converted constant expression of type bool (see 13.4.3 [temp.arg.nontype] paragraph 5), and the second conversion in the implicit conversion sequence is a boolean conversion, which is not allowed in the conversion for a converted constant expression (see 7.7 [expr.const] paragraph 3). Conversions in the other direction (from bool to integer types) are permitted here, since they're integral promotions.
Rationale (February, 2012):
The analysis is correct, and the example is ill-formed. Implementations that accept it are in error.
Some classes that would produce a constant when initialized by value-initialization are not considered literal types. For example:
struct A { int a; };      // non-constexpr default constructor
struct B : A {};          // non-literal type
constexpr int i = B().a;  // OK, trivial constructor not called
constexpr B b = B();      // error, constexpr object of non-literal type
Additional note (February, 2017):
This is effectively issue 644, which was resolved once, then had its resolution backed out via the resolution of issue 1071 (actual drafting in issue 981).
Rationale (February, 2021):
The adoption of paper P1331R2 (at the July, 2019 meeting) rendered the question in the issue moot, as the requirement that a constexpr constructor initialize all its non-static data members was removed, so the defaulted B default constructor is now constexpr.
It appears that the current specification of constant expressions in 7.7 [expr.const] paragraph 2 permits examples like
constexpr const char* p = "asdf"; constexpr char ch = p[2];
This seems unnecessarily complicated for both users and implementers. If subscripting were defined directly, rather than in terms of pointer arithmetic and indirection (see issue 1213), we could still support the obvious cases of things like
constexpr char ch2 = "asdf"[2];
without requiring compilers and users to track the target of address-constant pointers and references.
Rationale (October, 2012):
CWG was comfortable with this implication of the current wording.
A const integer initialized with a constant can be used in constant expressions, but a const floating point variable initialized with a constant cannot. This was intentional, to be compatible with C++03 while encouraging the consistent use of constexpr. Some people have found this distinction to be surprising, however.
It was also observed that allowing const floating point variables as constant expressions would be an ABI-breaking change, since it would affect lambda capture.
One possibility might be to deprecate the use of const integral variables in constant expressions.
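The distinction can be illustrated as follows (illustrative only; the variable names are invented):

const int    ci = 10;
const double cd = 10.0;
constexpr double ce = 10.0;
int a[ci];          // OK: a const integral variable initialized with a constant expression
// int b[(int)cd];  // error: a const floating-point variable is not usable in a constant expression
int c[(int)ce];     // OK: a constexpr floating-point variable is usable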
Additional note, April, 2015:
EWG requested CWG to allow use of const floating-point variables in constant expressions.
Rationale (May, 2015):
CWG felt that the current rules should not be changed and that programmers desiring floating point values to participate in constant expressions should use constexpr instead of const.
In an example like
extern int const x; struct A { constexpr A () { } int value = x; }; int const x = 123; constexpr A a;
it is not clear whether the constructor for A is well-formed (because the initialization for x has not yet been seen) and whether that constant value is used in the initialization of a. There is implementation divergence on these questions.
Rationale (June, 2014):
The requirements for a constexpr constructor in 9.2.6 [dcl.constexpr] do not require that an initializer be constant at the point of definition, similar to the provision for mutually-recursive constexpr functions, which require that at least one of the functions will contain a reference to a not-yet-defined constexpr function. Determination of whether an expression is constant or not is made in the context in which the expression appears, by which time the constant value of x in the example above is known. CWG feels that the current wording is clear enough that the example is well-formed.
There is implementation divergence on the handling of typeid in constant expressions, for example:
static_assert(&typeid(int) == &typeid(int), ""); // #1
According to the current wording, it is unspecified whether two evaluations of the typeid operator produce the same result, even though typeid can be used in constant expressions as long as its operand is not a glvalue of a polymorphic class type. Of particular concern is the case where typeid might be evaluated in different translation units.
Rationale (November, 2014):
Because the result of two separate evaluations of the typeid operator are not guaranteed to produce the same result, the comparison in the example is not permitted in a constant expression.
Consider an example like:
constexpr int f() { return 5; }      // function must be constexpr
constexpr int && q = f();            // but result is not constant
constexpr int const & r = 2;         // temporary is still not constant

int main() {
    q = 11;                          // OK
    const_cast< int & >( r ) = 3;    // OK (temporary object is not ROMable)
    constexpr int && z = 7;          // Error? Temporary does not have static storage duration?
}
A constexpr reference must be initialized by a constant expression (9.2.6 [dcl.constexpr] paragraph 9), yet it may refer to a modifiable temporary object. Such a temporary is guaranteed static initialization, but it's not ROMable.
A non-const constexpr reference initialized with an lvalue expression is useful, because it indicates that the underlying storage of the reference may be statically initialized, or that no underlying storage is required at all.
When the initializer is a temporary, finding its address is trivial. There is no reason to declare any intent regarding the computation of its address. On the other hand, an initial value is provided, and that is also required to be a constant expression, although it's never treated as a constant.
The situation is worse for local constexpr references. The initializer generates a temporary when the declaration is executed. The temporary is a locally scoped, unique object. This renders constexpr meaningless, because although the address computation is trivial, it still must be done dynamically.
C++11 constexpr references required initialization by reference constant expressions, which had to “designate an object with static storage duration or a function” (C++11 7.7 [expr.const] paragraph 3). A temporary with automatic storage duration granted by the reference fails this requirement.
C++14 removes reference constant expressions and the static storage requirement, rendering the program well-defined with an apparently defeated constexpr specifier. (GCC and Clang currently provide the C++11 diagnosis.)
Suggested resolution: a temporary bound to a constexpr reference should itself be constexpr, implying const-qualified type. Forbid binding a constexpr reference to a temporary unless both have static storage duration. (In local scope, the static specifier fixes the issue nicely.)
Rationale (November, 2014):
This issue is already covered by 7.7 [expr.const] paragraph 4, which includes conversions and temporaries in the analysis.
There is implementation variance in the treatment of the following example:
constexpr int f(int x) { return x; } int main() { struct { int x = f(x = 37); } constexpr a = { }; }
Is the assignment to x considered to satisfy the requirements of 7.7 [expr.const] bullet 2.17,
modification of an object (7.6.19 [expr.ass], 7.6.1.6 [expr.post.incr], 7.6.2.3 [expr.pre.incr]) unless it is applied to a non-volatile lvalue of literal type that refers to a non-volatile object whose lifetime began within the evaluation of e;
assuming that e is the full-expression encompassing the initialization of a?
Notes from the October, 2018 teleconference:
This kind of example was previously ill-formed but it was inadvertently allowed by the change to the “non-vacuous initialization” rule. That rule should be restricted to class and array types, making this example again ill-formed.
Rationale, February, 2021:
The resolution of issue 2256 makes clear that the lifetime of x has not begun because its initialization is not yet complete, so the assignment is undefined behavior and thus ill-formed in a constant expression.
According to Clause 8 [stmt.stmt] paragraph 3,
A name introduced by a declaration in a condition (either introduced by the decl-specifier-seq or the declarator of the condition) is in scope from its point of declaration until the end of the substatements controlled by the condition. If the name is redeclared in the outermost block of a substatement controlled by the condition, the declaration that redeclares the name is ill-formed.
This does not exempt class and enumeration names, which can ordinarily coexist with non-type names in the same scope (_N4868_.6.4.10 [basic.scope.hiding] paragraph 2). However, there is implementation variance in the handling of examples like:
void g() {
if (int N = 3) {
struct N { } n; // ill-formed but not diagnosed by some implementations
}
}
Should the rule for conditions be updated to allow for this case?
Rationale (April, 2018):
Hiding of tag names by non-type names was added for C compatibility. C does not support conditions, and there was no consensus to extend the tag/non-type hiding rules into contexts where C compatibility is not required.
According to Clause 8 [stmt.stmt] paragraph 3,
A name introduced by a declaration in a condition (either introduced by the decl-specifier-seq or the declarator of the condition) is in scope from its point of declaration until the end of the substatements controlled by the condition. If the name is redeclared in the outermost block of a substatement controlled by the condition, the declaration that redeclares the name is ill-formed.
Should there be a similar rule about redeclaring names introduced by init-statements?
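An illustration of the init-statement case (the names are invented for exposition):

void g() {
  if (int i = 0; i == 0) {
    int i = 1;   // redeclares the init-statement's i in the outermost block;
                 // ill-formed per 6.4.3 [basic.scope.block] paragraph 3 (see the rationale below)
  }
}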
Notes from the April, 2018 teleconference:
CWG agreed that such a rule should be added.
Rationale (January, 2019):
The init-statement case is covered by 6.4.3 [basic.scope.block] paragraph 3. These two references should be harmonized and cross-referenced appropriately as an editorial change.
Consider:
template<typename Iter> void f(Iter a, Iter b) { const int v = 10; auto do_something = [&] (auto thing) { if constexpr (is_random_access_iterator<Iter> && is_integral<decltype(thing)>) *(a + 1) = v; }; do_something(5); do_something("foo"); }
Determining whether v is captured requires instantiating the "if constexpr", but that results in a hard error for a statement that will eventually be discarded.
Rationale (February, 2018):
These questions were resolved by the adoption of paper P0588R1 at the November, 2017 meeting.
The effect of constexpr if in non-templated code is primarily limited to not requiring definitions for entities that are odr-used in discarded statements. This eliminates a plausible implementation technique of simply skipping the tokens of a discarded statement. Should the Standard allow such an approach? One needed change might be to say that all diagnosable rules become “no diagnostic required” inside discarded statements.
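For example, in non-templated code (illustrative only; never_defined is an invented name):

void never_defined();   // declared but never defined anywhere
void h() {
  if constexpr (false) {
    never_defined();    // odr-use in a discarded statement: no definition is required,
                        // but other diagnosable errors here must still be diagnosed, so the
                        // tokens of the discarded statement cannot simply be skipped
  }
}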
Rationale (April, 2018):
The design was thoroughly discussed before adopting the feature and the current specification was intentionally adopted. Any request for a change should go through the normal EWG process at this point.
The expansion of a range-based for in 8.6.5 [stmt.ranged] paragraph 1 involves a declaration of the form
auto && __range = range-init;
However, it is not permitted to bind a reference to an array of runtime bound (9.3.4.3 [dcl.ref] paragraph 5), even though it is intended that such arrays can be used in a range-based for.
Rationale (September, 2013):
Arrays of runtime bound were moved from the normative specification to a proposed Technical Specification.
When jumping past initialization of a local static variable, the value of the static becomes indeterminate. Seems like this behavior should be illegal just as it is for local variables with automatic storage duration.
Here is an example:
struct X {
    X(int i) : x(i) {}
    int x;
};

int f(int c) {
    if (c)
        goto ly;       // error here for jumping past next stmt.
    static X a = 1;
ly:
    return a.x;        // either 1 or 0 depending on implementation.
}
8.8 [stmt.dcl] P3 should be changed to:
A program that jumps from a point where a local variable with automatic or static storage duration is not in scope to a point where it is in scope is ill-formed unless the variable has POD type (3.9) and is declared without an initializer (8.5).

This would imply "static X a = 1;" should be flagged as an error. Note that this behavior may be a "quality of implementation issue" which may be covered in 6.7 P4. Paragraph 4 seems to make the choice of static/dynamic initialization indeterminate. Making this an error and thus determinate seems the correct thing to do since that is what is already required of automatic variables.
Steve Adamczyk: Some version of this may be appropriate, but it's common to have code that is executed only the first time it is reached, and to have an initialization of a static variable inside such a piece of code. In such a case, on executions after the first there is indeed a jump over the declaration, but the static variable is correctly initialized -- it was initialized the first time the routine was called.
void f() { static bool first_time = true; if (!first_time) goto after_init; static int i = g(); first_time = false; after_init: ... }
Rationale (October, 2004):
The CWG sees no reason to change this specification. Local static variables are different from automatic variables: automatic variables, if not explicitly initialized, can have indeterminate (“garbage”) values, including trap representations, while local static variables are subject to zero initialization and thus cannot have garbage values.
The latitude granted to implementations regarding performing dynamic initialization of local static objects as if it were static initialization is exactly parallel to namespace scope objects (6.9.3.2 [basic.start.static]), as are the restrictions on programmer assumptions.
Because a definition is also a declaration, it might make sense to change uses of "declaration or definition" to simply "declaration".
Notes from the March 2004 meeting:
Jens Maurer prepared drafting for this issue, but we find ourselves reluctant to actually make the changes. Though correct, they seemed more likely to be misread than the existing wording.
Proposed resolution:
Remove in Clause 3 [intro.defs] “parameter” the indicated words:
an object or reference declared as part of a function declaration or definition, or in the catch clause of an exception handler, that acquires a value on entry to the function or handler; ...
Remove in 13.2 [temp.param] paragraph 10 the indicated words:
The set of default template-arguments available for use with a template declaration or definition is obtained by merging the default arguments from the definition (if in scope) and all declarations in scope in the same way default function arguments are (...).
Remove in 13.8 [temp.res] paragraph 2 the indicated words:
A name used in a template declaration or definition and that is dependent on a template-parameter is assumed not to name a type unless the applicable name lookup finds a type name or the name is qualified by the keyword typename.
Remove in 13.8.4.1 [temp.point] paragraph 1 the indicated words:
Otherwise, the point of instantiation for such a specialization immediately follows the namespace scope declaration or definition that refers to the specialization.
Remove in 13.8.4.1 [temp.point] paragraph 3 the indicated words:
Otherwise, the point of instantiation for such a specialization immediately precedes the namespace scope declaration or definition that refers to the specialization.
Remove in 13.9.4 [temp.expl.spec] paragraph 21 the indicated words:
Default function arguments shall not be specified in a declaration or a definition for one of the following explicit specializations:
- ...
[Note: default function arguments may be specified in the declaration or definition of a member function of a class template specialization that is explicitly specialized. ]
Remove in 13.10.3.6 [temp.deduct.type] paragraph 18 the indicated words:
[Note: a default template-argument cannot be specified in a function template declaration or definition; ...]
Remove in 16.4.3.2 [using.headers] paragraph 3 the indicated words:
A translation unit shall include a header only outside of any external declaration or definition, and shall include the header lexically before the first reference to any of the entities it declares or first defines in that translation unit.
Rationale (October, 2004):
CWG felt that readers might misunderstand “declaration” as meaning “non-definition declaration.”
According to 9.1 [dcl.pre] paragraph 3,
In a simple-declaration, the optional init-declarator-list can be omitted only when declaring a class (Clause 11 [class]) or enumeration (9.7.1 [dcl.enum]), that is, when the decl-specifier-seq contains either a class-specifier, an elaborated-type-specifier with a class-key (11.3 [class.name]), or an enum-specifier.
This does not allow for the new simplified friend declaration syntax (11.8.4 [class.friend] paragraph 3), which permits the forms

friend elaborated-type-specifier ;
friend simple-type-specifier ;
friend typename-specifier ;
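For example (an illustrative sketch, not the issue's original text; the class names are invented):

class C;
template<class T> class Ct;
class X {
  friend C;         // simple-type-specifier naming a class
  friend Ct<int>;   // simple-type-specifier naming a class template specialization
};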
Rationale (May, 2014):
The friend specifier can only appear in a member-declaration, which contains a member-declarator-list, not an init-declarator-list.
Subclause 9.1 [dcl.pre] paragraph 1 defines simple-declaration as:
simple-declaration:
    decl-specifier-seq init-declarator-list_opt ;
    attribute-specifier-seq decl-specifier-seq init-declarator-list ;
    ...

However, 9.1 [dcl.pre] paragraph 2 then refers to a simple-declaration using a different production:

A simple-declaration or nodeclspec-function-declaration of the form

attribute-specifier-seq_opt decl-specifier-seq_opt init-declarator-list_opt ;

is divided into three parts...
It appears that the latter redefines the grammar non-terminal simple-declaration in an inconsistent way.
Rationale (April, 2017):
The unification of the “in the form” pattern is confusing, so the question was based on a misunderstanding of the text.
11.5 [class.union] paragraph 3 implies that anonymous unions in unnamed namespaces need not be declared static (it only places that restriction on anonymous unions "declared in a named namespace or in the global namespace").
However, 9.2.2 [dcl.stc] paragraph 1 says that "global anonymous unions... shall be declared static." This could be read as prohibiting anonymous unions in unnamed namespaces, which are the preferred alternative to the deprecated use of static.
Rationale (10/99): An anonymous union in an unnamed namespace is not "a global anonymous union," i.e., it is not a member of the global namespace.
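For example (an illustrative sketch; the member names are invented):

namespace {
  union { int i; double d; };        // OK without static: not a member of the global namespace
}
static union { long j; float e; };   // a global anonymous union must be declared static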
9.2.2 [dcl.stc] paragraph 7 seems out of place in the current organization of the Standard:
The linkages implied by successive declarations for a given entity shall agree. That is, within a given scope, each declaration declaring the same variable name or the same overloading of a function name shall imply the same linkage. Each function in a given set of overloaded functions can have a different linkage, however. [Example:...
The preceding two paragraphs on static and extern simply defer to 6.6 [basic.link] to describe their interaction with linkage, so it seems appropriate for this paragraph to move there as well so that all the information on linkage is in one place.
Rationale (May, 2015):
The material in 6.6 [basic.link] deals with linkage concepts, while 9.2.2 [dcl.stc] is concerned with the syntactic constructs in a program that result in the linkages described in 6.6 [basic.link]. CWG felt that the referenced paragraph falls more into the latter category than the former.
In 9.2.3 [dcl.fct.spec], para. 3, the following sentence
A function defined within a class definition is an inline function.
should, if I am not mistaken, instead be:
A function defined within a class declaration is an inline function.
Notes from October 2002 meeting:
This is not a defect. Though there is a long history, going back to the ARM, of use of the term "class declaration" to mean the definition of the class, we believe "class definition" is clearer. We have opened issue 379 to deal with changing all other uses of "class declaration" to "class definition" where appropriate.
A customer reports that when he attempts to replace ::operator new with a user-defined function, the standard library calls the default function by preference if the user-defined function is inline. I believe that our compiler is correct, and that such a replacement function isn't allowed to be inline, but I'm not sure there's sufficiently explicit language in the standard.
In general, of course, the definition of an inline function must be present in every translation unit where the function is called. (9.2.3 [dcl.fct.spec], par 4) It could be argued that this requirement doesn't quite address replacement functions: what we're dealing with is the odd case where we've already got one definition and the user is supplying a different one. I'd like to see something specifically addressing the case of a replacement function.
So what do we have? I see discussion of requirement for a replacement ::operator new in three places: 16.4.5.6 [replacement.functions], 17.6.3.2 [new.delete.single] par 2, and 6.7.5.5 [basic.stc.dynamic] par 2-3. I don't see anything explicitly saying that the replacement function may not be inline. The closest I can find is 17.6.3.2 [new.delete.single] par 2, which says that "a C++ program may define a function with this function signature that displaces the default version defined by the C++ Standard library". One might argue that "with this function signature" rules out inline, but that strikes me as a slight stretch.
Have I missed anything?
Andrew Koenig: I think you've turned up a problem in 9.2.3 [dcl.fct.spec] paragraph 4. Consider:
// Translation unit 1
#include <iostream>
extern void foo(void (*)());
inline void bar() { std::cout << "Hello, world!" << std::endl; }
int main() { foo(bar); }

// Translation unit 2
void foo(void (*f)()) { (*f)(); }
Are you really trying to tell me that this program is ill-formed because the definition of bar is not available in translation unit 2?
I think not. The actual words in 9.2.3 [dcl.fct.spec] par 4 are
An inline function shall be defined in every translation unit in which it is used...

and I think that in this context, "used" should be interpreted to mean that bar is used only in translation unit 1, where it is converted to a value of type void(*)().
Notes from October 2003 meeting:
We don't think Andy Koenig's comment requires any action; "used" is already defined appropriately.
We agree that this replacement should not be allowed, but we think it's a library issue (in the rules for allowed replacements). Forwarded to library group; it's issue 404 on the library issues list.
Is the following valid?
template <class T> void f(T) { typedef int x; typedef T x; } int main() { f(1); }
There is an instantiation where the function is valid. Is an implementation allowed to issue an error on the template declaration because the types on the typedef are not the same (9.2.4 [dcl.typedef])?
How about
typedef T x; typedef T2 x;?
It can be argued that these cases should be allowed because they aren't necessarily wrong, but it can also be argued that there's no reason to write things like the first case above, and if such a case appears it's more likely to be a mistake than some kind of intentional test that int and T are the same type.
Notes from the October 2003 meeting:
We believe that all these cases should be allowed, and that errors should be required only when an instance of the template is generated. The current standard wording does not seem to disallow such cases, so no change is required.
The grammar does not allow for a declaration of the form
using T = enum class E : int;
However, it is widely accepted by current implementations. Should the rules be changed to accommodate this usage?
Rationale (November, 2014):
A type-id is intended as a reference to a type, but the opaque enumeration syntax is intended as a declaration, not a reference like an elaborated-type-specifier, so the current rules are as intended.
A constexpr function is required to have literal argument and return types. Consider an example like:
template <class T> struct B { constexpr B(T) { } }; struct A { B<A> b; };
Whether B(A) is constexpr depends on whether A is literal, which depends on whether B<A> is literal, which depends on whether B(A) is constexpr.
Rationale (August, 2011):
The requirements apply to definitions, not declarations.
Consider the following example:
struct A { template <class T> constexpr void* f(T) { return nullptr; } A* ap = (A*)f(A()); template <class ...T> constexpr A() {} };
A default constructor template instance would recurse infinitely via the member initializer for A::ap. However, since it's a template, by 9.2.6 [dcl.constexpr] paragraph 6, that would just mean that the instance shouldn't be treated as constexpr.
Is an implementation really expected to handle that? In effect, we have to try to evaluate the expression and if that fails, nullify the constexpr-ness of the A::A<>() instance, and re-examine the initializer with the new understanding of that instance?
Rationale (April, 2013):
In the cited example, the constructor is constexpr; it simply cannot be used in a constant expression. The error would be detected at the time of such a use.
There does not appear to be language in the current wording stating that constexpr cannot be applied to a variable of volatile-qualified type. Also, the wording in 7.7 [expr.const] paragraph 2 referring to “a non-volatile object defined with constexpr” might lead one to infer that the combination is permitted but that such a variable cannot appear in a constant expression. What is the intent?
Rationale (September, 2013):
The combination is intentionally permitted and could be used in some circumstances to force constant initialization.
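For example (illustrative only):

constexpr volatile int vi = 10;   // permitted; guarantees constant (static) initialization
// int a[vi];                     // but vi itself is not usable in a constant expression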
Neither 9.2.6 [dcl.constexpr] nor 9.2.2 [dcl.stc] forbids combining the thread_local and constexpr specifiers in the declaration of a variable. Should this combination be permitted?
Rationale (January, 2014):
Such an object could have mutable subobjects. The constexpr specifier guarantees static initialization.
Given an example like:
struct S {
constexpr S(): i(42) { }
~S();
int i;
};
double x[S().i]; // Error
such a constexpr constructor is completely useless, but there doesn't appear to be anything in the current wording making it ill-formed. Should it be?
Rationale (November, 2016):
Such constructors can be useful for guaranteeing static initialization of namespace-scope objects.
11.4 [class.mem] paragraph 2 says,
A class is considered a completely-defined object type (6.8 [basic.types]) (or complete type) at the closing } of the class-specifier. Within the class member-specification, the class is regarded as complete within function bodies, default arguments, and exception-specifications (including such things in nested classes). Otherwise it is regarded as incomplete within its own class member-specification.
In particular, the return type of a member function is not listed as a context in which the class type is considered complete; instead, that case is handled as an exception to the general rule in 9.3.4.6 [dcl.fct] paragraph 6 requiring a complete type in the definition of a function:
The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) unless the function definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
These rules have implications for the use of decltype. (The following examples use the not-yet-accepted syntax for specifying the return type of a function after its declarator, but the questions apply to the current syntax as well.) Consider:
struct deduced { int test() { return 0; } auto eval( deduced& d )->decltype( d.test() ) { return d.test(); } };
7.6.1.5 [expr.ref] paragraph 1 requires that the class type of the object or pointer expression in a class member access expression be complete, so this usage is ill-formed.
A related issue is the use of this in a decltype specifier:
struct last_one { int test() { return 0; } auto eval()->decltype( this->test() ) { return test(); } };
_N4868_.11.4.3.2 [class.this] paragraph 1 allows use of this only in the body of a non-static member function, and the return type is not part of the function-body.
Do we want to change the rules to allow these kinds of decltype expressions?
Rationale (February, 2008):
In the other cases where a class type is considered complete within the definition of the class, it is possible to defer handling the construct until the end of the definition. That is not possible for types, as the type may be needed immediately in subsequent declarations.
It was also noted that the primary utility of decltype is in generic contexts; within a single class definition, other mechanisms are possible (e.g., use of a member typedef in both the declaration of the operand of the decltype and to replace the decltype itself).
The first bullet of 9.2.9.3 [dcl.type.simple] paragraph 4 says,
There are two clarifications to this specification that would assist the reader. First, it would be useful to have a note highlighting the point that a parenthesized expression is neither an id-expression nor a member access expression.
Second, the phrase “the type of the entity named by e” is unclear as to whether cv-qualification in the object or pointer expression is or is not part of that type. Rephrasing this to read, “the declared type of the entity,” or adding “(ignoring any cv-qualification in the object expression or pointer expression),” would clarify the intent.
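The example the rationale below refers to is essentially the one in 9.2.9.3 [dcl.type.simple], along these lines:

struct A { double x; };
const A* a = new A();
decltype(a->x) x3;         // no parentheses: the declared type of A::x, i.e., double
decltype((a->x)) x4 = x3;  // parenthesized: const double&, picking up the const from the object expression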
Rationale (February, 2008):
The text is clear enough. In particular, both of these points are illustrated in the last two lines of the example contrasting decltype(a->x) and decltype((a->x)): in the former, the expression has no parentheses, thus satisfying the requirements of the first bullet and yielding the declared type of A::x, while the second has parentheses, falling into the third bullet and picking up the const from the object expression in the member access.
Because type deduction for the auto specifier is described in 9.2.9.6 [dcl.spec.auto] paragraph 6 as equivalent to the deduction that occurs in a call to a function template, the adjustment of the argument type from A to A& specified in 13.10.3.2 [temp.deduct.call] paragraph 3 is performed when the initializer is an lvalue. As a result, in the following example, ra has the type A& and not, as might be expected, A&&:
class A { }; void f() { A a; auto&& ra = a; }
It is unclear whether this is surprising enough, and potentially widely-enough used, to warrant making an exception to the current rules to handle this case differently.
Rationale (September, 2008):
It is important that the deduction rules be the same in the function and auto cases. The result of this example might be surprising, but maintaining a consistent model for deduction is more important.
An initializer list is treated differently in deducing the type of an auto specifier and in a function call. In 9.2.9.6 [dcl.spec.auto] paragraph 6, an initializer list is given special treatment so that auto is deduced as a specialization of std::initializer_list:
Once the type of a declarator-id has been determined according to 9.3.4 [dcl.meaning], the type of the declared variable using the declarator-id is determined from the type of its initializer using the rules for template argument deduction. Let T be the type that has been determined for a variable identifier d. Obtain P from T by replacing the occurrences of auto with either a new invented type template parameter U or, if the initializer is a braced-init-list (9.4.5 [dcl.init.list]), with std::initializer_list<U>. The type deduced for the variable d is then the deduced A determined using the rules of template argument deduction from a function call (13.10.3.2 [temp.deduct.call]), where P is a function template parameter type and the initializer for d is the corresponding argument.
In a function call, however, an initializer-list argument is a non-deduced context:
Template argument deduction is done by comparing each function template parameter type (call it P) with the type of the corresponding argument of the call (call it A) as described below. If removing references and cv-qualifiers from P gives std::initializer_list<P'> for some P' and the argument is an initializer list (9.4.5 [dcl.init.list]), then deduction is performed instead for each element of the initializer list, taking P' as a function template parameter type and the initializer element as its argument. Otherwise, an initializer list argument causes the parameter to be considered a non-deduced context (13.10.3.6 [temp.deduct.type]). [Example:
template<class T> void f(std::initializer_list<T>); f({1,2,3}); // T deduced to int f({1,"asdf"}); // error: T deduced to both int and const char* template<class T> void g(T); g({1,2,3}); // error: no argument deduced for T
This seems inconsistent, but it is not clear in which direction the inconsistency should be resolved. The use of an initializer list in a range-based for is an argument in favor of the 9.2.9.6 [dcl.spec.auto] treatment, but the utility of this deduction in other contexts is not apparent.
Rationale (October, 2012):
CWG felt that this language design question would be better considered by EWG.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 109.
The treatment of a declaration like the following is not clear:
auto (*f())() -> int; // #1
9.3.4.6 [dcl.fct] paragraph 2 appears to require determining the type of the nested declarator
auto (*f()); // #2
which, because it does not have a trailing-return-type, would be ill-formed by (C++11) 9.2.9.6 [dcl.spec.auto]. (In C++14, an auto return type without a trailing-return-type is, of course, permitted.)
Rationale (September, 2013):
The intent of the C++11 wording is that the requirement for a trailing return type applies only at the top level of the declarator to which auto applies, not to each possible recursive stage in the declarator processing. Also, as noted, the issue becomes moot with the changes enabling return type deduction.
Paper N3922 changed the rules for deduction from a braced-init-list containing a single expression in a direct-initialization context. Should a corresponding change be made for decltype(auto)? E.g.,
auto x8a = { 1 }; // decltype(x8a) is std::initializer_list<int> decltype(auto) x8d = { 1 }; // ill-formed, a braced-init-list is not an expression auto x9a{ 1 }; // decltype(x9a) is int decltype(auto) x9d{ 1 }; // decltype(x9d) is int
See also issue 1467, which also effectively ignores braces around a single expression; this change would be parallel to that one, even though the primary motivation for decltype(auto) is in the return type of a forwarding function, where direct-initialization does not apply.
Rationale (November, 2014):
CWG felt that this was a question of language design and thus more properly dealt with by EWG.
EWG 2022-11-11
This is a request for a new feature, which should be proposed in a paper to EWG.
The Standard does not indicate whether an explicit specialization of a function template can have a deduced return type. It seems a bit too much to require parsing the entire function body in order to tell which template is being specialized. In extreme cases, that could mean deferring access checks for the entire body of the function.
Rationale (May, 2015):
An explicit specialization with a deduced return type can only match a template declared with a deduced return type, so the actual return type is not needed in order to match the explicit specialization with the template being specialized.
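For example (illustrative only):

template<class T> auto f(T t) { return t; }   // declared with a deduced return type
template<class T> T    g(T t) { return t; }   // declared return type, no placeholder
template<> auto f(int t) { return t; }        // OK: can only match f
// template<> auto g(int t) { return t; }     // would not match g, which does not use a placeholder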
According to 9.2.9.6.1 [dcl.spec.auto.general] paragraph 15,
A function declared with a return type that uses a placeholder type shall not be a coroutine (9.5.4 [dcl.fct.def.coroutine]).
This should also apply to coroutine lambdas.
Rationale (July, 2020):
No change is needed. The restriction applies to functions, and the lambda's operator() is a function.
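For example (a minimal illustration; the lambda is invented):

auto bad = [] { co_return; };   // the lambda's operator() has a placeholder return type and is a
                                // coroutine, so it violates 9.2.9.6.1 [dcl.spec.auto.general] paragraph 15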
Do we really need the & ref-qualifier? We could get the same behavior without it if we relaxed the restriction on ref-qualified and non-ref-qualified overloads in the same set:
with the & ref-qualifier | without the & ref-qualifier |
struct S { void f(); }; | struct S { void f(); }; |
struct S { void f() &; }; | struct S { void f(); void f() && = delete; }; |
struct S { void f() &&; }; | struct S { void f() &&; }; |
struct S { void f() &; void f() &&; }; | struct S { void f(); void f() &&; }; |
The main objection I can see to this change is that we would lose the notational convenience of the & ref-qualifier, which would need to be replaced by a pair of declarations. We might overcome this by still allowing a single & on a function (although it would not be a ref-qualifier) as a synonym to a non-ref-qualified declaration plus a deleted ref-qualified declaration.
The biggest asymmetry between the implicit object parameter and regular parameters is not in reference binding but in type deduction. Consider:
template <class R, class C, class A> void f(R (C::*p)(A));
With these members:
struct S { void mv(std::string); void mr(std::string&); void ml(std::string&&); };
then
f(&S::mv); // deduces A = string
f(&S::mr); // deduces A = string&
f(&S::ml); // deduces A = string&&
On the other hand, with these members:
struct S { void mv(std::string); void mr(std::string) &; void ml(std::string) &&; };
then
f(&S::mv); // deduces C = S
f(&S::mr); // illegal
f(&S::ml); // illegal
To make template f work with any pointer to member function, I need three overloads of f. Add cv-qualifiers and it's twelve overloads!
And then there is the interaction with concepts. Consider this type:
struct Value { Value& operator=(const Value&) &; };
Is it, say, Regular? If so, will the following compile, and what is the outcome?
template <Regular T> void f() { T() = T(); } void g() { f<Value>(); }
If Value is not Regular, that is a good motivation to avoid ever using & ref-qualifiers on operator= (and probably on any member functions).
If Value is Regular, then either f<Value>() doesn't compile, violating one of the principal motivations for concepts, or it calls Value::operator= on an rvalue, which was explicitly prohibited.
Rationale, March, 2009:
The CWG did not feel that the suggested change was a significant improvement over the existing specification.
The production for parameters-and-qualifiers is long and will be even longer with the changes for the Transactional Memory TS. It might be beneficial to refactor it into more manageable chunks.
Rationale (May, 2015):
CWG felt that recent changes to the grammar are sufficient.
In deciding whether a construct is an object declaration or a function declaration, 9.3.3 [dcl.ambig.res] contains the following gem: "In that context, the choice is between a function declaration [...] and an object declaration [...] Just as for the ambiguities mentioned in 8.9 [stmt.ambig], the resolution is to consider any construct that could possibly be a declaration a declaration."
To what declaration do the last two "declarations" refer? Object, function, or (following from the syntax) possibly parameter declarations?
Notes from the 4/02 meeting:
This is not a defect. Section 9.3.3 [dcl.ambig.res] reads:
The ambiguity arising from the similarity between a function-style cast and a declaration mentioned in 8.9 [stmt.ambig] can also occur in the context of a declaration. In that context, the choice is between a function declaration with a redundant set of parentheses around a parameter name and an object declaration with a function-style cast as the initializer. Just as for the ambiguities mentioned in 8.9 [stmt.ambig], the resolution is to consider any construct that could possibly be a declaration a declaration.
The wording "any construct" in the last sentence is not limited to top-level constructs. In particular, the function declaration encloses a parameter declaration, whereas the object declaration encloses an expression. Therefore, in case of ambiguity between these two cases, the declaration is parsed as a function declaration.
Consider the following program:
struct Point { Point(int){} }; struct Lattice { Lattice(Point, Point, int){} }; int main(void) { int a, b; Lattice latt(Point(a), Point(b), 3); /* Line X */ }
The problem concerns the line marked /* Line X */, which is an ambiguous declarations for either an object or a function. The clause that governs this ambiguity is 9.3.3 [dcl.ambig.res] paragraph 1, and reads:
The ambiguity arising from the similarity between a function-style cast and a declaration mentioned in 8.9 [stmt.ambig] can also occur in the context of a declaration. In that context, the choice is between a function declaration with a redundant set of parentheses around a parameter name and an object declaration with a function-style cast as the initializer. Just as for the ambiguities mentioned in 8.9 [stmt.ambig], the resolution is to consider any construct that could possibly be a declaration a declaration. [Note: a declaration can be explicitly disambiguated by a nonfunction-style cast, by a = to indicate initialization or by removing the redundant parentheses around the parameter name. ]
Based on this clause there are two possible interpretations of the declaration in line X:
Note that the last sentence before the "[Note:" is not much help, because both options are declarations.
Steve Adamczyk: a number of people replied to this posting on comp.std.c++ saying that they did not see a problem. The original poster replied:
I can't do anything but agree with your argumentation. So there is only one correct interpretation of 9.3.3 [dcl.ambig.res] paragraph 1, but I have to say that with some rewording, the clause can be made a lot clearer, like stating explicitly that the entire declaration must be taken into account and that function declarations are preferred over object declarations.
I would like to suggest the following as replacement for the current 9.3.3 [dcl.ambig.res] paragraph 1:
The ambiguity arising from the similarity between a function-style cast and a declaration mentioned in 8.9 [stmt.ambig] can also occur in the context of a declaration. In that context, the choice is between a function declaration with a redundant set of parentheses around a parameter name and an object declaration with a function-style cast as the initializer. The resolution is to consider any construct that could possibly be a function declaration a function declaration. [Note: To disambiguate, the whole declaration might have to be examined to determine if it is an object or a function declaration.] [Note: a declaration can be explicitly disambiguated by a nonfunction-style cast, by a = to indicate initialization or by removing the redundant parentheses around the parameter name. ]
Notes from the 4/02 meeting:
The working group felt that the current wording is clear enough.
In an example like,
namespace N { enum E { X }; } struct S { S(N::E); }; S s(S(N::X));
the last line disambiguates as an (ill-formed) function declaration, because the restriction requiring unqualified parameter names is semantic, not syntactic. Should the language be changed to use the presence of a qualified-id in this case as disambiguation? There is implementation divergence in the handling of this example.
Rationale (June, 2014):
CWG noted that the grammar change to allow disambiguation based on the parameter name being qualified is large, so the cost outweighs the relatively small benefit for disambiguating this particular corner case.
The disambiguation of a fragment like
(T())*x
where T is a type and x is a variable, is unclear. Is it a cast to type T() of the expression *x, or is it a binary operator * multiplying a value-initialized T by x? Current implementations treat it as the former, which is not helpful since the specified type is a function type and thus always ill-formed.
Rationale (November, 2014):
According to 9.3.3 [dcl.ambig.res], T() is to be taken as a function type, so the cast interpretation is correct, and one of the examples in this section is very nearly exactly this case.
Consider the following example:
struct S { virtual void v() = 0; }; void f(S sa[10]); // permitted?
9.3.4.5 [dcl.array] paragraph 1 says that a declaration like that of sa is ill-formed:
T is called the array element type; this type shall not be a reference type, the (possibly cv-qualified) type void, a function type or an abstract class type.
On the other hand, 9.3.4.6 [dcl.fct] paragraph 3 says that the type of sa is adjusted to S*, which would be permitted:
The type of each parameter is determined from its own decl-specifier-seq and declarator. After determining the type of each parameter, any parameter of type “array of T” or “function returning T” is adjusted to be “pointer to T” or “pointer to function returning T,” respectively.
It is not clear whether the parameter adjustment trumps the prohibition on declaring an array of an abstract class type or not. Implementations differ in this respect: EDG 2.4.2 and MSVC++ 7.1 reject the example, while g++ 3.3.3 and Sun Workshop 8 accept it.
Rationale (April, 2005):
The prohibition in 9.3.4.5 [dcl.array] is absolute and does not allow for exceptions. Even though such a type in a parameter declaration would decay to an allowed type, the prohibition applies to the type before the decay.
This interpretation is consistent with the resolution of issue 337, which causes template type deduction to fail if such types are deduced. It was also observed that pointer arithmetic on pointers to abstract classes is very likely to fail, and the fact that the programmer used array notation to declare the pointer type is a strong indication that he/she expected to use subscripting.
According to 9.3.4.5 [dcl.array] paragraph 1,
In a declaration T D where D has the form
D1 [ constant-expression_opt ] attribute-specifier_opt
and the type of the identifier in the declaration T D1 is “derived-declarator-type-list T”, then the type of the identifier of D is an array type; if the type of the identifier of D contains the auto type-specifier, the program is ill-formed.
This has the effect of prohibiting a declaration like
int v[1]; auto (*p)[1] = &v;
This restriction is unnecessary and presumably unintentional.
Note also that the statement that “the type of the identifier of D is an array type” is incorrect when the nested declarator is not simply a declarator-id. A similar problem exists in the wording of 9.4.4 [dcl.init.ref] paragraph 3 for function types.
Rationale (March, 2011):
The functionality of the auto specifier was intentionally restricted to simple cases; supporting complex declarators like this was explicitly discussed and rejected when the feature was adopted.
The runtime check for violating the maximum size of a stack-based array object is ill-advised. Many implementations cannot easily determine the available stack space, and checking against a fixed limit is not helpful.
Proposed resolution (September, 2013):
Change 9.3.4.5 [dcl.array] paragraph 1 as follows:
...The expression is erroneous if:
- its value before converting...
- its value is such that the size of the allocated object would exceed the implementation-defined limit for the maximum size of an object (Annex B [implimits]);
- ...
...If the expression is erroneous, an exception of a type that would match a handler (14.4 [except.handle]) of type std::bad_array_length (_N3690_.18.6.2.2 [bad.array.length]) is thrown [Footnote: Implementations are encouraged also to throw such an exception if the size of the object would exceed the remaining stack space. —end footnote].
This resolution also resolves issue 1675.
C-style variable-length arrays (which have been widely implemented as extensions to C++) permit a zero-length array. Similarly, arrays created by new-expressions can have a length of zero. Forbidding zero-length arrays of runtime bound is a gratuitous incompatibility.
Proposed resolution (September, 2013):
Change 9.3.4.5 [dcl.array] paragraph 1 as follows:
...The expression is erroneous if:
- its value before converting to std::size_t or, in the case of an expression of class type, before application of the second standard conversion (12.2.4.2.3 [over.ics.user]) is less than or equal to zero;...
If the expression, after converting to std::size_t, is a core constant expression and the expression is erroneous or its value is zero, the program is ill-formed. If the expression... std::bad_array_length (_N3690_.18.6.2.2 [bad.array.length]) is thrown.
An object of array type If N is zero, an object of array type has no elements. Otherwise, it contains a contiguously allocated non-empty set of N subobjects of type T. The type...
Rationale (February, 2014):
The specification was removed from the WP and moved into a Technical Specification.
9.3.4.6 [dcl.fct] paragraph 2 says:
If the parameter-declaration-clause is empty, the function takes no arguments. The parameter list (void) is equivalent to the empty parameter list.

Can a typedef to void be used instead of the type void in the parameter list?
Rationale: The IS is already clear that this is not allowed.
There doesn't appear to be an explicit prohibition of a function declaration of the form
auto f() -> decltype(f());
Presumably there should be.
Rationale (February, 2012):
As noted in issue 1433, the point of declaration of the function name is after the complete declarator, i.e., after the trailing return type, so the recursion posited in this issue cannot occur.
9.3.4.7 [dcl.fct.default] paragraph 9 says that extra default arguments added after a using-declaration but before a call are usable in the call, while 9.9 [namespace.udecl] paragraph 9 says that extra function overloads are not. This seems inconsistent, especially given the similarity of default arguments and overloads.
Rationale (10/99): The Standard accurately reflects the intent of the Committee.
Consider:
struct A { int i; A() { void foo(int=i); } };
It's not clear whether that is well-formed or not. The default argument refers to the non-static member i and thus implicitly uses this; if this is regarded as a kind of parameter or local variable, the default argument would be ill-formed. On the other hand, there doesn't seem to be a good reason to ban the code, either.
Rationale (February, 2012):
9.3.4.7 [dcl.fct.default] paragraphs 8-9 should be interpreted as making the example ill-formed.
The resolution of issue 1214 makes it ill-formed to use an initializer of the form ({...}) with a variable of a non-class type. This can cause problems with a mem-initializer of the form
constexpr cond_variable() : cond(PTHREAD_COND_INITIALIZER) {}
If pthread_cond_t is an array, PTHREAD_COND_INITIALIZER will be a braced-init-list and the mem-initializer will be ill-formed.
Rationale (August, 2011):
A non-static data member initializer can be used in this case.
The semantics of a parenthesized braced-init-list are not clear, whether appearing as a mem-initializer or standalone.
Rationale (February, 2012):
CWG feels that the semantics are sufficiently clear without any changes to the current wording.
The resolution of issue 1301 changed the status of T{}, where T is an aggregate, from being value-initialization to being aggregate initialization. This change breaks the description of DefaultConstructible in 16.4.4.2 [utility.arg.requirements] Table 19. LWG has opened an issue for this (2170) but would like CWG to consider a core approach that would categorize T{} as value initialization, even when T is an aggregate.
Rationale (April, 2013):
There is a distinction in the core language between aggregate initialization and value initialization. For example, a class with a deleted default constructor can be list-initialized via aggregate initialization but not value-initialized.
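For illustration, a minimal sketch of the distinction the rationale draws; it assumes the aggregate rules in effect when the rationale was written (before C++20, a user-declared deleted default constructor did not prevent a class from being an aggregate):
struct A { A() = delete; int i; };
A a{};       // aggregate initialization under those rules: OK despite the deleted default constructor
A b = A();   // value-initialization: ill-formed, the deleted default constructor is selected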
The current wording of the WP allows aggregate initialization of parameters in function calls. For example, 12.2.4.2.6 [over.ics.list] paragraph 4 reads:
Otherwise, if the parameter has an aggregate type which can be initialized from the initializer list according to the rules for aggregate initialization (9.4.2 [dcl.init.aggr]), the implicit conversion sequence is a user-defined conversion sequence. [Example:
struct A { int m1; double m2; };
void f(A);
f( {'a', 'b'} );   // OK: f(A(int,double)) user-defined conversion
f( {1.0} );        // error: narrowing
—end example]
The rules for aggregate initialization in 9.4.2 [dcl.init.aggr] paragraph 11 allow braces to be elided in the initializer:
In a declaration of the form
T x = { a };
It is not clear whether this phrasing should be interpreted as allowing brace elision only in a simple-declaration and thus not in a function argument or whether this restriction is inadvertent and should be removed.
Rationale (November, 2010):
The restriction is intentional. Support for aggregate initialization is principally intended for legacy code and C compatibility, not for code written using the new facilities of the language.
Existing practice appears to be to allow C++03-style aggregate initialization from a parenthesized string literal, e.g.,
struct S { char arr[4]; } s = {("abc")};
This should be standardized, to allow examples like
struct S { char arr[4]; }; void f(S); void g() { f({("abc")}); }
Rationale (October, 2012):
CWG agreed that this is already permitted by virtue of _N4567_.5.1.1 [expr.prim.general] paragraph 6.
The correct interpretation of an example like the following is not clear:
struct A { int x[] = { 0 }; };
Should the initializer be considered as implicitly determining the omitted array bound?
Rationale (November, 2014):
The requirement for determining an omitted bound in an aggregate is that it be “initialized” (9.4.5 [dcl.init.list] paragraph 4); since the brace-or-equal-initializer might, in fact, be ignored in some or all uses of the class, it should not be considered as definitively initializing the member and thus does not determine the array bound. Clarification of this intent could be done editorially, but CWG felt that no normative change was required.
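A minimal sketch of the contrast underlying the rationale (hypothetical declarations):
int a[] = { 0 };        // OK: the bound is deduced from the initializer
struct A {
  int x[] = { 0 };      // ill-formed: the default member initializer does not determine the bound,
                        // so the member has an array type of unknown (incomplete) bound
};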
There is implementation divergence with respect to an example like:
constexpr int f(int &r) { r *= 9; return r - 12; }
struct A { int &&temporary; int x; int y; };
constexpr A a1 = { 6, f(a1.temporary), a1.temporary }; // #1
Some implementations accept this code and others say that a1.temporary is not a constant expression in the initializer at #1.
Rationale (July, 2019)
The example is valid; the constant evaluation is the entire initialization of the constexpr variable a1, so the temporary bound to a1.temporary began its lifetime within that constant evaluation.
In section 9.4.4 [dcl.init.ref], paragraph 5, there is following note:
Note: the usual lvalue-to-rvalue (4.1), array-to-pointer (4.2), and function-to-pointer (4.3) standard conversions are not needed, and therefore are suppressed, when such direct bindings to lvalues are done.
I believe that this note is misleading. There should be either:
The problem:
int main() {
  const int ci = 10;
  int * pi = NULL;
  const int * & rpci = pi;
  rpci = &ci;
  *pi = 12;   // circumvent constness of "ci"
}
#include <iostream>
int main() {
  int * pi = NULL;
  const int * const & rcpci = pi;   // 1
  int i = 0;
  pi = &i;                          // 2
  if (pi == rcpci)
    std::cout << "bound to lvalue" << std::endl;
  else
    std::cout << "bound to temporary rvalue" << std::endl;
}
There has been discussion on this issue on comp.lang.c++.moderated a month ago, see http://groups.google.pl/groups?threadm=9bed99bb.0308041153.1c79e882%40posting.google.com and there seems to be some confusion about it. I understand that the note is not normative, but apparently even some compiler writers are misled (try the above code snippets on a few different compilers, using different compilation options - notably GCC 3.2.3 with -Wall -pedantic), thus it should be cleared up.
My proposal is to change the wording of the discussed note to:
Note: the result of a standard conversion is never an lvalue, and therefore all standard conversions (clause 4) are suppressed when such direct bindings to lvalues are done.
Rationale (April, 2005):
As acknowledged in the description of the issue, the referenced text is only a note and has no normative impact. Furthermore, the examples cited do not involve the conversions mentioned in the note, and the normative text is already sufficiently clear that the types in the examples are not reference-compatible.
According to the logic in 9.4.4 [dcl.init.ref] paragraph 5, the following example should create a temporary array and bind the reference to that temporary:
const char (&p)[10] = "123";
That is presumably not intended (issue 450 calls a similar outcome for rvalue arrays “implausible”). Current implementations reject this example.
Rationale (August, 2010):
The Standard does not describe initialization of array temporaries, so a program that requires such is ill-formed.
Note (October, 2010):
Although in general an object of array type cannot be initialized from another object of array type, there is special provision in 9.4.3 [dcl.init.string] for doing so when the source object is a string literal, as in this example. The issue is thus being reopened for further consideration in this light.
Notes from the November, 2010 meeting:
The CWG agreed that the current wording appears to permit this example but still felt that array temporaries are undesirable. Wording should be added to disallow this usage.
Proposed resolution (November, 2010):
Change 9.4.4 [dcl.init.ref] paragraph 5 as follows:
...
If the initializer expression is a string literal (5.13.5 [lex.string]), the program is ill-formed.
Otherwise, a temporary of type...
(See also issue 1232, which argues in favor of allowing array temporaries.)
Rationale (March, 2011):
In consideration of the arguments made in issue 1232, CWG agreed to allow array temporaries and there is thus no reason to prohibit them in this case.
As described in the “additional note, January, 2012” in issue 1287, questions were raised regarding the treatment of class prvalues in the original proposed resolution and the proposed resolution was revised (February, 2012) to address those concerns. The revised resolution raised its own set of concerns with regard to slicing and performance, however, and the issue was moved back to "review" status to allow further discussion.
At the April, 2013 meeting, it was decided to proceed with the original resolution of issue 1287 and split off the concerns regarding class prvalues into this issue.
Notes from the September, 2013 meeting:
The resolution for issue 1604 results in indirect binding to a subobject and will no longer cause slicing.
Rationale (November, 2013):
CWG determined that, in light of the resolution of issue 1604, no further change was necessary.
The current wording of the Standard appears to permit code like
void f(const char (&)[10]); void g() { f("123"); f({'a','b','c','\0'}); }
creating a temporary array of ten elements and binding the parameter reference to it. This is controversial and should be reconsidered. (See issues 1058 and 1232.)
Rationale (March, 2016):
Whether to support creating a temporary array in such cases is a question of language design and thus should be considered by EWG.
EWG 2022-11-11
The intent is adequately expressed in the specification.
The exposition of list initialization using an array in 9.4.5 [dcl.init.list] paragraph 4 raises the question of whether an empty initializer list is permitted, as declaration of an array with a zero bound is ill-formed.
Rationale (October, 2009):
The description is intended as an aid to understanding the concepts, not as a literal transformation that is performed. An implementation is permitted to allocate a zero-length array (e.g., via a new-expression), even if such an array cannot be declared.
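For illustration, a conceptual sketch only; the backing array mentioned in the comment is exposition, not something an implementation must materialize:
#include <initializer_list>
std::initializer_list<int> e = { };   // well-formed: conceptually backed by an array of zero const int,
                                      // even though "const int a[0] = { };" cannot be declared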
Consider the example,
struct A { char c; }; void f (char d) { A a = { d + 1 }; }
This code is now ill-formed because of the narrowing conversion from the int result type of the addition, not because of any real narrowing. This seems like an embarrassment for C++0x. It would be better not to get an error about any arithmetic involving non-constant operands just because it might overflow with some values.
Rationale (November, 2010):
The CWG agreed that this behavior is unfortunate but felt that it would be too difficult to formulate a satisfactory set of rules for handling complex expressions correctly for a small gain in utility (the user can simply add a cast in order to avoid the error).
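Rewriting the example above with the cast the rationale mentions (a minimal sketch):
struct A { char c; }; void f (char d) { A a = { static_cast<char>(d + 1) }; }   // OK: the cast avoids the narrowing error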
The Standard does not specify whether std::initializer_list may be an aggregate or not. Strictly speaking, the order of the bullets in 9.4.5 [dcl.init.list] paragraph 3 depends on the answer. The existence of a constructor declaration in 17.10 [support.initlist] suggests that it is not an aggregate but does not say so definitively.
Rationale (February, 2012):
The presence of the constructor declaration in 17.10 [support.initlist] is sufficient to establish that std::initializer_list is not an aggregate.
Issue 1030 clarified that elements of an initializer-list are evaluated in the order they are written, but does that also apply to implied expressions? That is, given:
struct A { A(); ~A(); }; struct B { B(int, const A& = A()); ~B(); }; struct C { B b1, b2; }; int main() { C{1,2}; }
Do we know that the first B is constructed before the second A? I suppose that's what we want, even though it complicates exception region nesting since the As need to live longer than the B subobject cleanups.
Rationale (October, 2012):
Because this is an expression, not a declaration, the As live until the end of the full-expression.
Dealing with aggregate-initialized temporaries has been a bit of a headache because unlike aggregate initialization of variables, each element initialization is not a full-expression, so various things behave differently because they are in the context of initializing a temporary.
This can either be inconsistent with aggregate initialization of a variable (in which each element is a full-expression) or inconsistent with list-initialization via constructor (in which each element is a subexpression).
Rationale (October, 2012):
The rules are acceptable as written; declaration and expression contexts are different.
The definition of a “narrowing conversion” in 9.4.5 [dcl.init.list] paragraph 7 is couched in terms of the type of the target. A conversion to a too-small bit-field should presumably also be categorized as a narrowing conversion. (See also issue 1449.)
Additional note (August, 2012):
It was observed that the proposed narrowing error, unlike in other contexts, cannot be circumvented by adding a cast. The only way to avoid a narrowing error would be to avoid using the brace syntax or to mask the value to an appropriate width. Even the latter approach could conceivably require an implementation to track the maximum number of bits needed by operations applied on top of the masked value, unless the masking were required to be at the top level of the initializer expression.
Rationale (October, 2012):
CWG felt that this was more of a language design question and would be better considered by EWG.
Rationale (February, 2014):
EWG determined that no action should be taken on this issue.
The specification of list-initialization in 9.4.5 [dcl.init.list] paragraph 3 has a bullet that reads,
Otherwise, if the initializer list has a single element of type E and either T is not a reference type or its referenced type is reference-related to E, the object or reference is initialized from that element
It is not clear what is meant by being “initialized from the element.” If one assumes that it means “go back to 9.4 [dcl.init] and follow the logic ladder there with the element,” the logical result is that an initializer for a scalar could be arbitrarily deeply nested in braces, with each trip through the 9.4 [dcl.init] / 9.4.5 [dcl.init.list] recursion peeling off one layer. Presumably that is not intended.
Rationale (October, 2012):
The wording “a single element of type E” excludes the case of a nested braced initializer, because such an element has no type.
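For illustration, a minimal sketch of the consequence of that reading (hypothetical declarations):
int i{42};     // OK: the list has a single element of type int
int j{{42}};   // ill-formed: the nested braced-init-list has no type, so it is not "a single element of type E"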
If an initializer_list object is copied and the copy is elided, is the lifetime of the underlying array object extended? E.g.,
void f() {
std::initializer_list<int> L =
std::initializer_list<int>{1, 2, 3}; // Lifetime of array extended?
}
The current wording is not clear.
(See also issue 1299.)
Notes from the October, 2012 meeting:
The consensus of CWG was that the behavior should be the same, regardless of whether the copy is elided or not.
Rationale (November, 2016):
With the adoption of paper P0135R1, there is no longer any copy in this example to be elided.
The resolution of issue 1467 now allows for initialization of aggregate classes from an object of the same type. Similar treatment should be afforded to array aggregates.
Notes from the June, 2014 meeting:
This is a request for extended language facilities and thus should be evaluated by EWG.
EWG 2022-11-11
This is a request for a new feature that should be proposed in a paper to EWG.
According to 9.4.5 [dcl.init.list] bullet 7.3, an implicit conversion
from an integer type or unscoped enumeration type to a floating-point type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type
is a narrowing conversion. There does not seem to be a good reason why a conversion from, for example, an unsigned char value to a floating point value should be considered to be narrowing, since floating point types should be able to represent all the values.
Rationale (November, 2014):
CWG felt that type-based (in contrast to value-based) restrictions such as this should not depend on the platform-specific characteristics of the type, so the general rule should apply.
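A minimal sketch of the resulting behavior under the general rule (hypothetical declarations):
constexpr unsigned char k = 42;
double d1{k};      // OK: constant expression whose value is exactly representable
void f(unsigned char c) {
  double d2{c};    // ill-formed: c is not a constant expression, so this is a narrowing conversion,
                   // even though double can represent every value of unsigned char
}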
According to 9.5.2 [dcl.fct.def.default] paragraph 2,
An explicitly-defaulted function may be declared constexpr only if it would have been implicitly declared as constexpr...
This is relevant for wrapper functions like
template<class T> struct wrap { T t; constexpr wrap() = default; constexpr wrap(const wrap&) = default; };
It is not clear how the new wording for constexpr member functions of class templates in the proposed resolution of issue 1358 affects this:
If the instantiated template specialization of a constexpr function template or member function of a class template would fail to satisfy the requirements for a constexpr function or constexpr constructor, that specialization is still a constexpr function or constexpr constructor, even though a call to such a function cannot appear in a constant expression.
Rationale (April, 2013):
The specification is as intended. The defaulted constructor will be constexpr if it can be, so it should not be explicitly declared constexpr in order to avoid the problems mentioned.
It would seem intuitively that a deleted function cannot throw an exception, but 9.5.3 [dcl.fct.def.delete] does not mention that. This could conceivably be useful in SFINAE contexts.
Rationale (November, 2010):
Any reference to a deleted function is ill-formed, so it doesn't really matter whether they are noexcept or not.
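For illustration, a minimal sketch (hypothetical declarations) of why the distinction is unobservable:
void f() = delete;
bool b = noexcept(f());   // ill-formed: this refers to the deleted function f,
                          // even though the operand of noexcept is unevaluated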
Whether or not structured bindings can be captured by a lambda and, if so, with what semantics, is unclear from the current wording.
Rationale (March, 2018):
Paper P0588R1, adopted at the October, 2017 meeting, answers the question by explicitly prohibiting such captures.
Although in most contexts “= expression” can be replaced by “{ expression }”, enumerator-definitions accept only the “=” form. This could be surprising.
Additional note (October, 2009):
The Committee may wish to consider default arguments in this light as well.
Rationale (August, 2010):
This suggestion was considered and rejected by EWG.
The text of 9.7.1 [dcl.enum] paragraph 2 explicitly forbids unnamed scoped enumerations:
The optional identifier shall not be omitted in the declaration of a scoped enumeration.
There does not appear to be a good rationale for this restriction since a typedef name can be used to name the enumerators. It is also inconsistent with similar constructs. For example,
typedef enum class { e } E; E x = E::e;
is ill-formed, but
typedef struct { enum { s }; } S; int y = S::s;
is well-formed.
Rationale (August, 2011):
The use of typedef names for linkage purposes is intended for C compatibility and should not be extended to features that are not part of the C subset of C++.
9.3.4 [dcl.meaning] paragraph 1 and Clause 11 [class] paragraph 11 prohibit decltype-qualified declarators and class names, respectively. There is no such prohibition in 9.7.1 [dcl.enum] for enumeration names. Presumably that is an oversight that should be rectified.
Rationale (February, 2021):
The resolution of issue 2156 includes the required prohibition.
I received an inquiry/complaint that you cannot re-open a namespace using a qualified name. For example, the following program is ok, but if you uncomment the commented lines you get an error:
namespace A {
  namespace N { int a; }
  int b;
  namespace M { int c; }
}
//namespace A::N {
//  int d;
//}
namespace A {
  namespace M { int e; }
}
int main() {
  A::N::a = 1;
  A::b = 2;
  A::M::c = 3;
  // A::N::d = 4;
  A::M::e = 5;
}
Andrew Koenig: There's a name lookup issue lurking here. For example:
int x;
namespace A {
  int x;
  namespace N { int y; };
}
namespace A::N {
  int* y = &x;   // which x?
}
Jonathan Caves: I would assume that any rule would state that:
namespace A::B {
would be equivalent to:
namespace A { namespace B {
so in your example 'x' would resolve to A::x
BTW: we have received lots of bug reports about this "oversight".
Lawrence Crowl: Even worse is
int x;
namespace A { int x; }
namespace B {
  int x;
  namespace ::A {
    int* y = &x;
  }
}
I really don't think that the benefits of qualified names here are worth the cost.
Notes from April 2003 meeting:
We're closing this because it's on the Evolution working group list.
Current implementations reject an example like:
namespace X { int n; } namespace A = X; namespace { namespace A = X; } int k = A::n;
This seems curious, since a similar example with using-declarations or with alias-declarations is valid.
Rationale (November, 2014):
The current wording of the Standard makes this example ambiguous, and CWG did not find the similarities mentioned compelling enough to warrant a change.
Daveed Vandevoorde: While reading Core issue 11 I thought it implied the following possibility:
template<typename T> struct B { template<int> void f(int); };
template<typename T> struct D: B<T> {
  using B<T>::template f;
  void g() { this->f<1>(0); }   // OK, f is a template
};
However, the grammar for a using-declaration reads:
and nested-name-specifier never ends in "template".
Is that intentional?
Bill Gibbons:
It certainly appears to be, since we have:
Rationale (04/99): Any semantics associated with the template keyword in using-declarations should be considered an extension.
Notes from the April 2003 meeting:
We decided to make no change and to close this issue as not-a-defect. This is not needed functionality; the example above, for example, can be written with ->template. This issue has been on the issues list for years as an extension, and there has been no clamor for it.
It was also noted that knowing that something is a template is not enough; there's still the issue of knowing whether it is a class or function template.
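For illustration, a sketch of the ->template alternative mentioned in the notes, mirroring the B and D of the example above but with no using-declaration:
template<typename T> struct B { template<int> void f(int); };
template<typename T> struct D: B<T> {
  void g() { this->template f<1>(0); }   // OK: template disambiguates the member template f
};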
Additional note (February, 2011):
This issue is being reopened for further consideration after additional discussion. Instead of writing
using T::template X; // ill-formed
for a class template member X of base class T, one could write
template<typename U> using X = typename T::template X<U>;
Rationale (March, 2011):
There was insufficient motivation for a change at this point.
9.9 [namespace.udecl] says,
A using-declaration shall not name a template-id.
It is not clear whether this prohibition applies to the entity for which the using-declaration is a synonym or to any name that appears in the using-declaration. For example, is the following code well-formed?
template <typename T> struct base { void bar (); };
struct der : base<int> {
  using base<int>::bar;   // ill-formed ?
};
Rationale (10/99): 9.9 [namespace.udecl] paragraph 1 says, "A using-declaration introduces a name..." It is the name that is thus introduced that cannot be a template-id.
According to 9.9 [namespace.udecl] paragraph 17,
The base class members mentioned by a using-declaration shall be visible in the scope of at least one of the direct base classes of the class where the using-declaration is specified.
The rationale for this restriction is not clear and should be reconsidered.
Rationale (November, 2014):
The rule was introduced because the hiding of a base class member by an intermediate derived class is potentially intentional and should not be capable of circumvention by a using-declaration in a derived class. The consensus of CWG preferred not to change the restriction.
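For illustration, a minimal sketch (hypothetical classes) of the circumvention the quoted rule was intended to prevent:
struct A { void f(); };
struct B : A { void f(int); };   // B intentionally hides A::f()
struct C : B {
  using A::f;                    // ill-formed under the quoted rule: A::f is not visible in the scope of B
};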
Additional note (November, 2020):
The changes in P1787R6, adopted at the November, 2020 meeting, remove the quoted wording, affirming the rationale in a different manner.
Now that the concept of "conditionally-supported" is available (see N1564), perhaps asm should not be required of every implementation.
Rationale (October, 2004):
This is covered in paper N1627. We would like to keep asm as a keyword for all implementations, however, to enhance portability by preventing programmers from inadvertently using it as an identifier.
[Picked up by evolution group at October 2002 meeting.]
How can we write a function template, or member function of a class template that takes a C linkage function as a parameter when the function type depends on one of the template parameter types?
extern "C" void f(int);
void g(char);
template <class T> struct A { A(void (*fp)(T)); };
A<char> a1(g); // okay
A<int> a2(f);  // error
Another variant of the same problem is:
extern "C" void f(int);
void g(char);
template <class T> void h( void (*fp)(T) );
int main() {
  h(g); // okay
  h(f); // error
}
Somehow permit a language linkage to be specified as part of a function parameter declaration. i.e.
template <class T> struct A { A( extern "C" void (*fp)(T) ); };
template <class T> void h( extern "C" void (*fp)(T) );
Suggested resolution: (Bill Gibbons)
The whole area of linkage needs revisiting. Declaring calling convention as a storage class was incorrect to begin with; it should be a function qualifier, as in:
void f( void (*pf)(int) c_linkage );
instead of the suggested:
void f( extern "C" void (*pf)(int) );
I would like to keep calling convention on the "next round" issues list, including the alternative of using function qualifiers.
And to that end, I suggest that the use of linkage specifiers to specify calling convention be deprecated - which would make any use of linkage specifiers in a parameter declaration deprecated.
Martin Sebor: 9.11 [dcl.link], paragraph 4 says that "A linkage-specification shall occur only in namespace scope..." I'm wondering why this restriction is necessary since it prevents, among other things, the use of the functions defined in <cmath> in generic code that involves function objects. For example, the program below is ill-formed since std::pointer_to_binary_function<> takes a pointer to a function with extern "C++" linkage which is incompatible with the type of the double overload of std::pow.
Relaxing the restriction to allow linkage specification in declarations of typedefs in class scope would allow std::pointer_to_binary_function<> ctor to be overloaded on both types (i.e., extern "C" and extern "C++"). An alternative would be to allow for the linkage specification to be deduced along with the type.
#include <cmath> #include <functional> #include <numeric> int main () { double a[] = { 1, 2, 3 }; return std::accumulate (a, a + 3, 2.0, std::pointer_to_binary_function<double, double, double>(std::pow)); }
Rationale (February, 2014):
EWG determined that no action should be taken on this issue.
Issue 1
9.11 [dcl.link] paragraph 6 says the following:
extern "C" int f(void);
namespace A {
  extern "C" int f(void);
};
using namespace A;
int i = f(); // Ok because only one function f() or
             // ill-formed
For name lookup, both declarations of f are visible and overloading cannot distinguish between them. Does the compiler have to check that these functions are really the same function, or is the program in error?
Rationale: These are the same function for all purposes.
Issue 2
A similar question may arise with typedefs:
// vendor A
typedef unsigned int size_t;
// vendor B
namespace std {
  typedef unsigned int size_t;
}
using namespace std;
size_t something(); // error?
Is this valid because the typedef size_t refers to the same type in both namespaces?
Rationale (04/99): In 9.8.4 [namespace.udir] paragraph 4:
If name lookup finds a declaration for a name in two different namespaces, and the declarations do not declare the same entity and do not declare functions, the use of the name is ill-formed.
The term entity applied to typedefs refers to the underlying type or class (6.1 [basic.pre] paragraph 3); therefore both declarations of size_t declare the same entity and the above example is well-formed.
[Picked up by evolution group at October 2002 meeting.]
Steve Clamage: I can't find anything in the standard that prohibits a language linkage on an operator function. For example:
extern "C" int operator+(MyInt, MyInt) { ... }
Clearly it is a bad idea: you could have only one operator+ with "C" linkage in the entire program, and you can't call the function from C code.
Mike Miller: Well, you can't name an operator function in C code, but if the arguments are compatible (e.g., not references), you can call it from C code via a pointer. In fact, because the language linkage is part of the function type, you couldn't pass the address of an operator function into C code unless you could declare the function to be extern "C".
Fergus Henderson: In the general case, for linkage to languages other than C, this could well make perfect sense.
Steve Clamage:
But is it disallowed (as opposed to being stupid), and if so, where in the standard does it say so?
Mike Miller: I don't believe there's a restriction. Whether that is because of the (rather feeble) justification of being able to call an operator from C code via a pointer, or whether it was simply overlooked, I don't know.
Fergus Henderson: I don't think it is disallowed. I also don't think there is any need to explicitly disallow it.
Steve Clamage: I don't think the standard is clear enough on this point. I'd like to see a clarification.
I think either of these two clarifications would be appropriate:
extern "C" T operator+(T,T);   // ok
extern "C" T operator-(T,T);   // ok
extern "C" U operator-(U);     // error, two extern "C" operator-
Mike Miller: I think the point here is that something like
extern "xyzzy" bool operator<(S&,S&)
could well make sense, if language xyzzy is sufficiently compatible with C++, and the one-function rule only applies to extern "C", not to other language linkages. Given that it might make sense to have general language linkages for operators, is it worthwhile to make an exception to the general rule by saying that you can have any language linkage on an operator function except "C" linkage? I don't like exceptions to general rules unless they're very well motivated, and I don't see sufficient motivation to make one here.
Certainly this capability isn't very useful. There are lots of things in C++ that aren't very useful but just weren't worth special-casing out of the language. I think this falls into the same category.
Mike Ball: I DON'T want to forbid operator functions within an extern "C". Rather I want to add operator functions to that sentence in paragraph 4 of 9.11 [dcl.link] which reads
A C language linkage is ignored for the names of class members and the member function type of class member functions.
My reason is simple: C linkage makes a total hash of scope. Any "C" functions declared with the same name in any namespace scope are the same function. In other words, namespaces are totally ignored.
This provision was added toward the end of the standardization process, and was, I thought, primarily to make it possible to put the C library in namespace std. Otherwise, it seems an unwarranted attack on the very concept of scope. We (wisely) didn't force this on static member functions, since it would essentially promote them to the global scope.
Now I think that programmers think of operator functions as essentially part of a class. At least for one very common design pattern they are treated as part of the class interface. This pattern is the reason we invented Koenig lookup for operator functions.
What happens when such a class definition is included, deliberately or not, in an extern "C" declaration? The member operators continue to work, but the non-member operators can suddenly get strange and hard-to-understand messages. Quite possibly, they get the messages only when combined with other classes in other compilation units. You can argue that the programmer shouldn't put the class header in a linkage declaration in the first place, but I can still find books that recommend putting extern "C" around entire header files, so it's going to happen.
I think that including operator functions in the general exclusion from extern "C" doesn't remove a capability; rather, it ensures a capability that programmers already think they have.
Rationale (10/00):
The benefits of creating an exception for operator functions were outweighed by the complexity of adding another special case to the rules.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
[Picked up by evolution group at October 2002 meeting.]
9.11 [dcl.link] paragraph 4 says,
A C language linkage is ignored for the names of class members and the member function types of class member functions.
This makes good sense, since C linkage names typically aren't compatible with the naming used for member functions at link time, nor is a C language linkage function type necessarily compatible with the calling convention for passing this to a non-static member function.
But C language linkage type (not name) for a static member function is invaluable for a common programming idiom. When calling a C function that takes a pointer to a function, it's common to use a private static member function as a "trampoline" which retrieves an object reference (perhaps by casting) and then calls a non-static private member function. If a static member function can't have a type with C language linkage, then a global or friend function must be used instead. These alternatives expose more of a class's implementation than a static member function; either the friend function itself is visible at namespace scope alongside the class definition or the private member function must be made public so it can be called by a non-friend function.
Suggested Resolution: Change the sentence cited above to:
A C language linkage is ignored for the names of class members and the member function types of non-static class member functions.
The example need not be changed because it doesn't involve a static member function.
The following workaround accomplishes the goal of not exposing the class's implementation, but at the cost of significant superstructure and obfuscation:
// foo.h
extern "C" typedef int c_func(int);
typedef int cpp_func(int);
class foo {
private:
  c_func* GetCallback();
  static int Callback(int);
};

// foo.cpp
#include "foo.h"
// A local pointer to the static member that will handle the callback.
static cpp_func* cpp_callback=0;
// The C function that will actually get registered.
extern "C" int CFunk(int i) { return cpp_callback(i); }
c_func* foo::GetCallback() {
  cpp_callback = &Callback;  // Only needs to be done once.
  return &CFunk;
}
Rationale (10/99): The Standard correctly reflects the intent of the Committee.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
Is this code valid:
extern "C" void f(); namespace N { int var; extern "C" void f(){ var = 10; } }
The two declarations of f refer to the same external function, but is this a valid way to declare and define f?
And is the definition of f considered to be in namespace N or in the global namespace?
Notes from October 2002 meeting:
Yes, this example is valid. See 9.11 [dcl.link] paragraph 6, which contains a similar example with the definition in the global namespace instead. There is only one f, so the question of whether the definition is in the global namespace or the namespace N is not meaningful. The same function is found by name lookup whether it is found from the declaration in namespace N or the declaration in the global namespace, or both (9.8.4 [namespace.udir] paragraph 4).
Issue 4 separated the concepts of language linkage for names and language linkage for types; since the names of functions with internal linkage are not visible outside their (C++) translation unit, there is no need to restrict overloading of extern "C" functions with internal linkage, e.g.,
extern "C" { static void f(); static void f(int); }
although the types of such functions still have C language linkage and thus can be called via a function pointer from C code.
The change permitting such overloading, however, has not been widely implemented since the resolution of issue 4, leading some to suggest that the unnecessary restriction on function overloading of such functions should be reimposed.
If it is decided to keep the resolution of issue 4, 9.11 [dcl.link] paragraph 6 should be clarified:
At most one function with a particular name can have C language linkage.
Presumably this sentence was overlooked when the intent of the resolution of issue 4 was applied to the wording, which likely explains why the resolution has not been more widely implemented.
Rationale (September, 2013):
There was no consensus in CWG for a change to the current rules. 9.11 [dcl.link] paragraph 6 should be read as applying to the C language linkage of the name, not the function type.
According to 9.11 [dcl.link] paragraph 7,
A declaration directly contained in a linkage-specification is treated as if it contains the extern specifier (9.2.2 [dcl.stc]) for the purpose of determining the linkage of the declared name and whether it is a definition. Such a declaration shall not specify a storage class.
This prohibits a declaration like
extern "C++" thread_local int n;
Should this be changed?
Rationale (June, 2014):
This restriction is the same for static and simply requires that the braced form of linkage-specification be used.
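For illustration, a minimal sketch (hypothetical variables) of the braced form the rationale refers to:
extern "C++" thread_local int n1;       // ill-formed: a storage class in a declaration directly contained
                                        // in a linkage-specification
extern "C++" { thread_local int n2; }   // OK: the braced form of linkage-specification permits it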
The grammar for alignment-specifier in 9.12.1 [dcl.attr.grammar] paragraph 1 is:
There is no such nonterminal as alignment-expression; it should be assignment-expression instead.
Rationale (August, 2011)
This is an editorial issue that has been transmitted to the project editor.
P0028R4 contains this example:
[[ using CC: opt(1), debug ]] void f() {}          // Same as [[ CC::opt(1), CC::debug ]] void f() {}
[[ using CC: opt(1)]][[ CC::debug ]] void g() {}   // Okay (same effect as above).
However, there appears to be no normative justification for the claim that these two attribute-lists have the same effect.
Rationale (February, 2017):
The effects of such attributes are implementation-defined.
According to 9.12.1 [dcl.attr.grammar] paragraph 5,
Each attribute-specifier-seq is said to appertain to some entity or statement, identified by the syntactic context where it appears ( Clause 8 [stmt.stmt], 9.1 [dcl.pre], 9.3 [dcl.decl]). If an attribute-specifier-seq that appertains to some entity or statement contains an attribute or alignment-specifier that is not allowed to apply to that entity or statement, the program is ill-formed.
This does not, but presumably should, mention contract-attribute-specifiers.
Proposed resolution (March, 2019):
Change 9.12.1 [dcl.attr.grammar] paragraph 5 as follows:
Each attribute-specifier-seq is said to appertain to some entity or statement, identified by the syntactic context where it appears (Clause 8 [stmt.stmt], 9.1 [dcl.pre], 9.3 [dcl.decl]). If an attribute-specifier-seq that appertains to some entity or statement contains an attribute, contract-attribute-specifier, or alignment-specifier that is not allowed to apply to that entity or statement, the program is ill-formed. If an attribute-specifier-seq appertains to a friend declaration (11.8.4 [class.friend]), that declaration shall be a definition. No attribute-specifier-seq shall appertain to an explicit instantiation (13.9.3 [temp.explicit]).
Rationale (July, 2019):
With the adoption of paper P1823R0, removing contracts from C++20, this issue is moot.
Although 9.12.2 [dcl.align] paragraph 6 requires that all declarations of a given entity must have the same alignment, enforcing that requirement for class templates would require instantiating all declarations of the template, a process not otherwise needed. For example:
template<int M, int N> struct alignas(M) X; template<int M, int N> struct alignas(N) X {};
The same problem would presumably afflict any attribute applied to a class template.
Rationale (April, 2013):
9.12.2 [dcl.align] paragraph 6 requires that the alignments be “equivalent,” which in a dependent context is specified by 13.7.7.2 [temp.over.link] paragraph 5. The expressions in this example are not equivalent.
The [[noreturn]] attribute, as specified in 9.12.10 [dcl.attr.noreturn], applies to function declarations and is not integrated with the type system. This is incompatible with existing practice (as in gcc) and should be reconsidered.
Rationale (July, 2009):
The CWG reaffirmed the previous decisions not to have attributes apply to types and did not believe that the benefits were sufficient for this case to make an exception to the general rule.
C has rejected the notion of attributes, and introduced the noreturn facility as a keyword. To continue writing clean, portable code we should replace the [[noreturn]] attribute with a noreturn keyword, following the usual convention that while C obfuscates new keywords with _Capital and adds a macro to map to the comfortable spelling, C++ simply adopts the all-lowercase spelling.
Rationale (August, 2010):
CWG felt that an attribute was a more appropriate representation for this feature.
The definition of a “potentially-overlapping subobject” in 6.7.2 [intro.object] paragraph 7 does not exclude non-class subobjects; in particular, 9.12.11 [dcl.attr.nouniqueaddr] makes no restrictions on the types of members declared with the no_unique_address attribute. It is not clear that a potentially-overlapping scalar member or array of scalar elements is useful. Should there be a restriction on the type of potentially-overlapping subobjects?
CWG 2022-11-11
Restricting the type of a potentially-overlapping subobject would make it difficult to use no_unique_address on a subobject of dependent type, which may be a non-class type in some, but not all, specializations. Compilers can warn about nonsensical uses in non-dependent contexts.
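For illustration, a minimal sketch (hypothetical class template) of the dependent-type situation described above:
struct empty { };
template<typename T> struct box {
  [[no_unique_address]] T payload;   // T is a class type in some specializations and a scalar type in others
  int index;
};
box<empty> b1;   // payload is potentially-overlapping and may occupy no distinct storage
box<int> b2;     // payload is a scalar; the attribute is permitted but has no useful effect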
Non-static data member initializers should not be part of C++0x unless they have implementation experience.
Notes from the August, 2010 meeting:
The C++/CLI dialect has a very similar feature that has been implemented.
Rationale (March, 2011):
The full Committee voted not to remove this feature.
The grammar for member-declarator (11.4 [class.mem]) does not, but should, allow for a brace-or-equal-initializer on a bit-field declarator.
Rationale (October, 2015):
Such a change would introduce a new syntactic ambiguity. CWG also felt uncomfortable with a construct that is visually
expression = expression
not being an assignment expression.
[Detailed description pending.]
Rationale (November, 2016):
The reported issue is no longer relevant to the current working paper.
According to 11.4.1 [class.mem.general] paragraph 7, a noexcept-specifier is a complete-class context. This raises an issue when the function is a friend function; for example, consider:
using T = int; struct B { friend void g(B b) noexcept(sizeof(b.m) >= 4) { } T m = T(); }; int main() { B b; g(b); }
For friend declarations you need to be able to decide at the point of declaration whether it matches a prior declaration, and you can't do that if you treat the noexcept-specifier as a complete-class context.
There is implementation divergence in the treatment of this example.
Notes from the December, 2021 teleconference:
CWG questioned why the declaration matching couldn't be deferred until the end of the class.
CWG 2022-11-10
CWG believes that, in general, a "when needed" approach to parsing complete-class contexts is superior. In the present case, the existing wording clearly requires that the noexcept-specifier be delayed-parsed, which implies that matching the declaration of a friend function to declarations at namespace scope is also delayed.
It's not clear how lookup of a non-dependent qualified name should be handled in a non-static member function of a class template. For example,
struct A { int f(int); static int f(double); }; struct B {}; template<typename T> struct C : T { void g() { A::f(0); } };
The call to A::f inside C::g() appears non-dependent, so one might expect that it would be bound at template definition time to A::f(double). However, the resolution for issue 515 changed 11.4.3 [class.mfct.non.static] paragraph 3 to transform an id-expression to a member access expression using (*this). if lookup resolves the name to a non-static member of any class, making the reference dependent. The result is that if C is instantiated with A, A::f(int) is called; if C is instantiated with B, the call is ill-formed (the call is transformed to (*this).A::f(0), and there is no A subobject in C<B>). Both these results seem unintuitive.
(See also issue 1017.)
Notes from the November, 2010 meeting:
The CWG agreed that the resolution of issue 515 was ill-advised and should be reversed.
Rationale (March, 2011):
The analysis is incorrect; whether the reference is dependent or not, overload resolution chooses A::f(int) because of the rules in 12.2.2.2.2 [over.call.func] paragraph 3 dealing with contrived objects for static member functions.
Move semantics for *this should not be part of C++0x unless they have implementation experience.
Rationale (March, 2011):
The full Committee voted not to remove this feature.
An implicitly-declared special member function is defined as deleted (11.4.5 [class.ctor] paragraph 5, 11.4.7 [class.dtor] paragraph 3, 11.4.5.3 [class.copy.ctor] paragraphs 5 and 10) if any of the corresponding functions it would call from base classes is inaccessible. This is inconsistent with the treatment of access control in overload resolution and template argument deduction, where accessibility is ignored (but may result in an ill-formed program). This should be made consistent.
Rationale (July, 2009):
The current treatment is sufficiently useful to warrant the inconsistency with the other handling of access control. In particular, it enables such cases to be detected by SFINAE.
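A minimal sketch (hypothetical classes) of the SFINAE-style detection mentioned in the rationale:
#include <type_traits>
class B {
  B(const B&);                 // private, hence inaccessible outside B
public:
  B() = default;
};
struct D : B { };               // D's implicitly-declared copy constructor is defined as deleted
static_assert(!std::is_copy_constructible<D>::value,
              "the inaccessible base copy constructor is detectable via type traits");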
The list of causes for a defaulted default constructor to be defined as deleted, given in 11.4.5 [class.ctor] paragraph 5, should have a case for subobjects of a type with a destructor that is deleted or inaccessible from the defaulted constructor.
Rationale (January, 2012):
The supposedly-missing text is actually already present.
The working paper is quite explicit about
struct X { X(X, X const& = X()); };
being illegal (because of the chicken & egg problem wrt copying.)
Shouldn't it be as explicit about the following?
struct Y { Y(Y const&, Y = Y()); };
Rationale: There is no need for additional wording. This example leads to a program which either fails to compile (due to resource limits on recursive inlining) or fails to run (due to unterminated recursion). In either case the implementation may generate an error when the program is compiled.
Jack Rouse: In 11.4.5.3 [class.copy.ctor] paragraph 8, the standard includes the following about the copying of class subobjects in such a constructor:
Mike Miller: I'm more concerned about 11.4.5.3 [class.copy.ctor] paragraph 7, which lists the situations in which an implicitly-defined copy constructor can render a program ill-formed. Inaccessible and ambiguous copy constructors are listed, but not a copy constructor with a cv-qualification mismatch. These two paragraphs taken together could be read as requiring the calling of a copy constructor with a non-const reference parameter for a const data member.
Proposed Resolution (November, 2006):
This issue is resolved by the proposed resolution for issue 535.
Rationale (August, 2011):
These concerns have been addressed by other changes.
Section 11.4.5.3 [class.copy.ctor] paragraph 8 says the compiler-generated copy constructor copies scalar elements via the built-in assignment operator. Seems inconsistent. Why not the built-in initialization?
Notes from October 2002 meeting:
The Core Working Group believes this should not be changed. The standard already mentions built-in operators and the assignment operator does clearly define what must be done for scalar types. There is currently no concept of built-in initialization.
Paper N2987 suggests that an implicitly-declared copy or move constructor should be explicit if the corresponding constructor of any of its subobjects is explicit. During the discussion at the October, 2009 meeting, the CWG deemed this a separable question from the major emphasis of that paper, and this issue was opened as a placeholder for that discussion.
See also issue 1051.
Rationale (November, 2010):
The CWG did not see a correlation between the explicitness of a base class constructor and that of an implicitly-declared derived class constructor.
11.4.5.3 [class.copy.ctor] paragraph 12 says that a defaulted move constructor is defined as deleted if the class has a non-static data member or direct or virtual base class with a type that does not have a move constructor and is not trivially copyable. This seems more strict than is necessary; the subobject need not be trivially copyable, it should be enough for the selected constructor not to throw. In any case, the wording should be phrased in terms of the function selected by overload resolution rather than properties of the subobject type, and similarly for move assignment.
Rationale (November, 2010):
The CWG felt that the current specification was consistent and not overly problematic; users can add their own move constructor if needed.
Consider an example like,
struct A { A() = default; A(A&) = default; A(const A&) = default; }; static_assert(!__is_trivially_copyable(A),""); struct B { A a; B() = default; B(const B&) = default; }; static_assert(__is_trivially_copyable(B),""); struct C { mutable A a; C() = default; C(const C&) = default; }; static_assert(!__is_trivially_copyable(C),"");
Presumably, all static_assert() conditions above are desired to evaluate true. Implementations diverge on this.
To decide whether a class is trivially copyable (Clause 11 [class] paragraph 6), we need to see whether it has a non-trivial copy constructor. So eventually we hit 11.4.5.3 [class.copy.ctor] paragraph 12:
A copy/move constructor for class X is trivial if it is not user-provided, its declared parameter type is the same as if it had been implicitly declared, and if
class X has no virtual functions (11.7.3 [class.virtual]) and no virtual base classes (11.7.2 [class.mi]), and
the constructor selected to copy/move each direct base class subobject is trivial, and
for each non-static data member of X that is of class type (or array thereof), the constructor selected to copy/move that member is trivial;
otherwise the copy/move constructor is non-trivial.
which seem to imply that the copy constructor needs to have been implicitly defined before we consider this rule. But copy ctors are not odr-used in this example, so they're not implicitly-defined (paragraph 13).
The same considerations apply to copy/move assignment operators.
It might be sufficient to clarify this specification by replacing “selected to” with “that would be selected to.”
Rationale (September, 2013):
CWG felt that the existing wording was clear enough.
According to 11.4.5.3 [class.copy.ctor] paragraph 11,
A defaulted move constructor that is defined as deleted is ignored by overload resolution (12.2 [over.match], 12.3 [over.over]). [Note: A deleted move constructor would otherwise interfere with initialization from an rvalue which can use the copy constructor instead. —end note]
Limiting this provision to defaulted move constructors introduces an unfortunate distinction between implicitly and explicitly deleted move constructors. For example, given
#include <iostream>
#include <memory>
using namespace std;
struct Expl {
  Expl() = default;
  Expl(const Expl &) { cout << " Expl(const Expl &)" << endl; }
  Expl(Expl &&) = delete;
};
struct Impl : Expl {
  Impl() = default;
  Impl(const Impl &) { cout << " Impl(const Impl &)" << endl; }
  Impl(Impl &&) = default;
};
struct Moveable {
  Moveable() { }
  Moveable(const Moveable &) { cout << " Moveable(const Moveable &)" << endl; }
  Moveable(Moveable &&) { cout << " Moveable(Moveable &&)" << endl; }
};
template<typename T> struct Container {
  Moveable moveable[2];
  T t;
};
int main() {
  cout << "Expl:" << endl;
  Container<Expl> c1(move(Container<Expl>()));
  cout << "Impl:" << endl;
  Container<Impl> c2(move(Container<Impl>()));
}
The output of this program is
Expl:
 Moveable(const Moveable &)
 Moveable(const Moveable &)
 Expl(const Expl &)
Impl:
 Moveable(Moveable &&)
 Moveable(Moveable &&)
 Impl(const Impl &)
Should the specification be changed to allow overload resolution to ignore all deleted move constructors instead of only the defaulted ones?
From one perspective, at least, the principal reason to delete a move constructor explicitly is to elicit an error if a move is attempted, and such a change would violate that intent. On the other hand, minimizing the difference between explicit default and implicit default seems like an important goal.
Rationale (February, 2014):
The specification is as intended.
The current wording of the Standard does not make clear whether a special member function that is defaulted and implicitly deleted is trivial. Triviality is visible in various ways that don't involve invoking the function, such as determining whether a type is trivially copyable and determining the result of various type traits. It also factors into some ABI specifications.
(See also issue 1734.)
Notes from the June, 2014 meeting:
CWG felt that deleted functions should be trivial. See also issue 1590.
Additional note, November, 2014:
See paper N4148.
Additional note, October, 2015:
Moved from "extension" status to "open" to allow consideration by CWG. See the additional discussion in issue 1734 for further details. See also issue 1496.
Rationale (October, 2015):
CWG feels that the triviality of a deleted function should be irrelevant. Any cases in which the triviality of a deleted function is observable should be amended to remove that dependency.
EWG has indicated that they are not currently in favor of removing the implicitly declared defaulted copy constructors and assignment operators that are deprecated in 11.4.5.3 [class.copy.ctor] paragraphs 7 and 18. Should this deprecation be removed?
EWG 2022-11-11
EWG expressed no interest to remove the deprecation.
EDG (and g++, for that matter) picks the explicit copy assignment operator, which we think is wrong in this case:
#include <stdio.h>
struct D;   // fwd declaration
struct B {
  D& operator=(D&);
};
struct D : B {
  D() {}
  D(int ii) { s = ii; }
  using B::operator=;
  int s;
};
int main() {
  D od, od1(10);
  od = od1; // implicit D::operator=(D&) called, not BASE::operator=(D&)
}
D& B::operator=(D& d) {
  printf("B::operator called\n");
  return d;
}
If you look at 11.4.5.3 [class.copy.ctor] paragraph 10 it explicitly states that in such a case the "using B::operator=" will not be considered.
Steve Adamczyk: The fact that the operator= you declared is (D&) and not (const D&) is fooling you. As the standard says, the operator= introduced by the using-declaration does not suppress the generation of the implicit operator=. However, the generated operator= has the (const D&) signature, so it does not hide B::operator=; it overloads it.
Kerch Holt: I'm not sure this is correct. Going by 12.8 P10 first paragraph we think that the other form "operator=(D&)" is generated because the two conditions mentioned were not met in this case: 1) there is no direct base with a "const [volatile] B&" or "B" operator=, and 2) no member has an operator= either. This implies the implicit operator is "operator=(D&)". So, if that is the case, the "hiding" should happen.
Also, in the last paragraph it seems to state that operators brought in from "using", no matter what the parameter is, are always hidden.
Steve Adamczyk: Not really. I think this section is pretty clear about the fact that the implicit copy assignment operator is generated. The question is whether it hides or overloads the one imported by the using-declaration.
Notes from the March 2004 meeting:
(a) Class B does get an implicitly-generated operator=(const B&); (b) the using-declaration brings in two operator= functions from B, the explicitly-declared one and the implicitly-generated one; (c) those two functions overload with the implicitly-generated operator=(const D&) in the derived class, rather than being hidden by the derived-class function, because it does not match either of their signatures; (d) overload resolution picks the explicitly-declared function from the base class because it's the best match in this case. We think the standard wording says this clearly enough.
Is the following a “copy assignment operator?”
struct A { const A& operator=(const A&) volatile; };
11.4.5.3 [class.copy.ctor] paragraph 9 doesn't say one way or the other whether cv-qualifiers on the function are allowed. (A similar question applies to the const case, but I avoided that example because it seems so wrong one tends to jump to a conclusion before seeing what the standard says.)
Since the point of the definition of “copy assignment operator” is to control whether the compiler generates a default version if the user doesn’t, I suspect the correct answer is that neither const nor volatile cv-qualification on operator= should be allowed for a “copy assignment operator.” A user can write an operator= like that, but it doesn't affect whether the compiler generates the default one.
Proposed Resolution (November, 2006):
Change 11.4.5.3 [class.copy.ctor] paragraph 9 as follows:
A user-declared copy assignment operator X::operator= is a non-static non-template non-volatile non-const member function of class X with exactly one parameter of type X, X&, const X&, volatile X& or const volatile X&.
[Drafting note: If a user-declared volatile operator= prevented the implicit declaration of the copy assignment operator, all assignments for objects of the given class (even to non-volatile objects) would pay the penalty for volatile write accesses in the user-declared operator=, despite not needing it.]
Additional note (December, 2008):
The proposed resolution addresses only cv-qualified assignment operators and is silent on ref-qualified versions. However, it would seem that the spirit of the resolution would indicate that a ref-qualified assignment operator would not be considered a copy assignment operator.
There appears to be an emerging idiom that relies on the idea that providing an lvalue-only assignment operator would prevent assignment to rvalues:
struct A {
  A& operator=(const A&) &;   // disable assignment to rvalue
};
The resolution should also be reconsidered in light of the use of a const-qualified assignment operator as part of the implementation of a proxy class, where the proxy object itself is constant and should not be changed, but the copy assignment operator would apply to the object to which the proxy object refers.
Rationale (March, 2009):
It was decided that cv-qualified and ref-qualified assignment operators should be considered copy assignment operators if they have the required parameter type.
For increased regularity between built-in types and class types, the copy assignment operator can be qualified with &, preventing assignment to an rvalue. The LWG is making that change in the Standard Library. It would seem a good idea to make a similar change, where possible, in the specification of implicitly-declared assignment operators. This would be the case when all subobjects of class type have a non-deleted copy assignment operator that is &-qualified.
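As a rough illustration of what the requested change would affect (the names A and B are invented), a &-qualified copy assignment operator rejects an rvalue left operand, while the implicitly-declared one does not:

struct A { };
struct B { B& operator=(const B&) & { return *this; } };

void g() {
  A{} = A{};     // OK today: the implicit copy assignment operator is not &-qualified
  // B{} = B{};  // error: the &-qualified operator= requires an lvalue left operand
}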
Rationale (July, 2009):
The LWG decided not to add reference qualifiers in the library, which reduces the motivation for making this change to implicit assignment operators.
In 11.4.5.3 [class.copy.ctor] paragraph 25, a move assignment operator is defined as deleted if it has any direct or indirect virtual base class. This could be relaxed to apply only if a virtual base is derived from more than once in the DAG.
Rationale (August, 2010):
CWG felt that there was insufficient motivation to change this at this time.
It is not clear whether a using-declaration naming an assignment operator from a base class can be considered to declare a copy assignment operator or not. For example:
struct A; struct B { constexpr A & operator= (const A &); }; struct A : B { using B::operator=; } a { a = a };
There is implementation divergence on the treatment of this code: should the using-declaration suppress or conflict with the implicit declaration of A::operator=?
Rationale (June, 2019):
This question is addressed explicitly by 9.9 [namespace.udecl] paragraph 4:
If a constructor or assignment operator brought from a base class into a derived class has the signature of a copy/move constructor or assignment operator for the derived class (11.4.5.3 [class.copy.ctor], 11.4.6 [class.copy.assign]), the using-declaration does not by itself suppress the implicit declaration of the derived class member; the member from the base class is hidden or overridden by the implicitly-declared copy/move constructor or assignment operator of the derived class, as described below.
Use of a decltype-specifier to name a destructor in an explicit destructor call is explicitly permitted in 11.4.7 [class.dtor] paragraph 13. However, the most straightforward attempt to do so, e.g.,
p->~decltype(*p)()
does not work, because *p is an lvalue and thus decltype(*p) is a reference type, not a class type. Even simply eliminating the reference is not sufficient, because p could be a pointer to a cv-qualified class type.
Either the provision for decltype-specifiers in explicit destructor calls should be removed or the specification should be expanded to allow reference and cv-qualified types to be considered as “denot[ing] the destructor's class type.”
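A commonly used workaround, shown here as a sketch (the helper destroy and the alias U are invented), is to strip the reference and cv-qualification through a typedef-name first:

#include <type_traits>

template <class T>
void destroy(T* p) {
  // p->~decltype(*p)();   // ill-formed: decltype(*p) is T&, a reference type
  using U = typename std::remove_cv<T>::type;
  p->~U();                 // a typedef-name denoting the class type works
}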
Notes from the April, 2013 meeting:
CWG favored replacing the existing syntax with something more flexible, for example, p->~auto(). This new syntax would also apply to pseudo destructors.
Rationale (November, 2013):
CWG felt that the suggested change should be considered by EWG before the issue is resolved.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 112.
According to 11.4.7 [class.dtor] paragraph 5,
A destructor is trivial if it is not user-provided and if:
the destructor is not virtual,
...
It is not clear why this restriction is needed, and it should be removed if it is not needed.
Rationale (February, 2014):
A trivial destructor is known to perform no actions and thus need not be invoked. A virtual destructor, however, might be member of a base class of an unknown derived class; it must therefore be called virtually in case an overriding virtual function performs some actions.
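A small sketch of the point being made (the names Base and Derived are invented):

struct Base { virtual ~Base() { } };
struct Derived : Base {
  ~Derived() { /* releases a resource */ }
};

void f(Base* p) {
  delete p;   // must dispatch virtually: if *p is a Derived, Derived's destructor
              // has to run even though Base's own destructor body is empty
}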
A posting in comp.lang.c++.moderated prompted me to try the following code:
struct S { template<typename T, int N> (&operator T())[N]; };
The goal is to have a (deducible) conversion operator template to a reference-to-array type.
This is accepted by several front ends (g++, EDG), but I now believe that 11.4.8.3 [class.conv.fct] paragraph 1 actually prohibits this. The issue here is that we do in fact specify (part of) a return type.
OTOH, I think it is legitimate to expect that this is expressible in the language (preferably not using the syntax above ;-). Maybe we should extend the syntax to allow the following alternative?
struct S { template<typename T, int N> operator (T(&)[N])(); };
Eric Niebler: If the syntax is extended to support this, similar constructs should also be considered. For instance, I can't for the life of me figure out how to write a conversion member function template to return a member function pointer. It could be useful if you were defining a null_t type. This is probably due to my own ignorance, but getting the syntax right is tricky.
Eg.
struct null_t {
  // null object pointer. works.
  template<typename T> operator T*() const { return 0; }

  // null member pointer. works.
  template<typename T, typename U> operator T U::*() const { return 0; }

  // null member fn ptr. doesn't work (with Comeau online). my error?
  template<typename T, typename U> operator T (U::*)()() const { return 0; }
};
Martin Sebor: Intriguing question. I have no idea how to do it in a single declaration but splitting it up into two steps seems to work:
struct null_t {
  template <class T, class U>
  struct ptr_mem_fun_t {
    typedef T (U::*type)();
  };

  template <class T, class U>
  operator typename ptr_mem_fun_t<T, U>::type () const { return 0; }
};
Note: In the April 2003 meeting, the core working group noticed that the above doesn't actually work.
Note (June, 2010):
It has been suggested that template aliases effectively address this issue. In particular, an identity alias like
template<typename T> using id = T;
provides the necessary syntactic sugar to be able to specify types with trailing declarator elements as a conversion-type-id. For example, the two cases discussed above could be written as:
struct S { template<typename T, int N> operator id<T[N]>&(); template<typename T, typename U> operator id<T (U::*)()>() const; };
This issue should thus be closed as now NAD.
Rationale (August, 2011):
As given in the preceding note.
Another instance to consider is that of invoking a member function from a null pointer:
struct A { void f () { } }; int main () { A* ap = 0; ap->f (); }
Which is explicitly noted as undefined in 11.4.3 [class.mfct.non.static], even though one could argue that since f() is empty, there is no lvalue->rvalue conversion.
If f is static, however, there seems to be no such rule, and the call is only undefined if the dereference implicit in the -> operator is undefined. IMO it should be.
Incidentally, another thing that ought to be cleaned up is the inconsistent use of "indirection" and "dereference". We should pick one. (This terminology issue has been broken out as issue 342.)
This is related to issue 232.
Rationale (October 2003):
We agreed the example should be allowed. p->f() is rewritten as (*p).f() according to 7.6.1.5 [expr.ref]. *p is not an error when p is null unless the lvalue is converted to an rvalue (7.3.2 [conv.lval]), which it isn't here.
The current wording of 11.4.9.3 [class.static.data] only allows a static data member to be initialized within the class definition if it is const. This restriction should be removed.
Rationale (July, 2009):
The consensus of the CWG was that there is insufficient motivation for such a change. A non-const static data member must still be defined outside the class if it is used, and the value of such a member cannot be used in a constant expression, unlike a constant static data member, so there is no real advantage to putting the initializer inside the class definition instead of in the definition of the static data member. The apparent parallel with non-static data member initialization is also not compelling; for example, the initializer for a non-static data member can contain forward references to members declared later in the class, while the same is not true of static data member initializers.
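The distinction the rationale relies on can be sketched as follows (the names are invented; this reflects the rules as they stood before inline variables):

struct S {
  static const int a = 10;   // OK: const, and usable in constant expressions
  static int b;              // non-const: the initializer goes with the definition
};
int S::b = 20;               // namespace-scope definition supplies the value
int arr[S::a];               // a can be used in a constant expression; b cannot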
The Standard requires that a const static data member that is initialized in the class definition must still be defined in namespace scope if it is odr-used. This seems unnecessary.
Rationale (August, 2011):
This is a request for an extension.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 100.
Given an example like
struct X { static constexpr const char *p = "foo"; }; static const char *q = X::p;
if this appears in more than one translation unit, must the value of q be the same in each? The implication of the one-definition rule would be that it must be, but current implementations do not give that result.
Rationale (November, 2014):
The interpretation is correct and the implementations are incorrect.
According to 11.4.9.3 [class.static.data] paragraph 3,
An inline static data member may be defined in the class definition and may specify a brace-or-equal-initializer. If the member is declared with the constexpr specifier, it may be redeclared in namespace scope with no initializer (this usage is deprecated; see _N4778_.D.4 [depr.static_constexpr]).
The out-of-class declaration of a static data member was formerly a definition and thus limited to occurring only once. This limitation was lost when the in-class declaration of inline static data members became the definition; the current specification has no apparent prohibition against multiple out-of-class declarations of a constexpr static data member. Should the restriction be reinstated?
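A sketch of the pattern in question (the names are invented); whether both redeclarations are accepted is exactly what the issue asks:

struct X {
  static constexpr int n = 42;   // in-class definition (implicitly inline)
};
constexpr int X::n;              // deprecated redeclaration, no initializer
constexpr int X::n;              // a second redeclaration: nothing obviously forbids it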
Rationale (November, 2018):
Current implementations support this usage and it does not appear to cause any problems.
11.4.2 [class.mfct] paragraph 5 says this about member functions defined lexically outside the class:
the member function name shall be qualified by its class name using the :: operator
11.4.9.3 [class.static.data] paragraph 2 says this about static data members:
In the definition at namespace scope, the name of the static data member shall be qualified by its class name using the :: operator
I would have expected similar wording in 11.4.12 [class.nest] paragraph 3 for nested classes. Without such wording, the following seems to be legal (and is allowed by all the compilers I have):
struct base { struct nested; }; struct derived : base {}; struct derived::nested {};
Is this just an oversight, or is there some rationale for this behavior?
Rationale (July, 2008):
The wording in Clause 11 [class] paragraph 10 (added by the resolution of issue 284, which was approved after this issue was raised) makes the example ill-formed:
If a class-head contains a nested-name-specifier, the class-specifier shall refer to a class that was previously declared directly in the class or namespace to which the nested-name-specifier refers (i.e., neither inherited nor introduced by a using-declaration), and the class-specifier shall appear in a namespace enclosing the previous declaration.
Is this legal? Should it be?
struct E { union { struct { int x; } s; } v; };
One compiler faults a type definition (i.e. of the anonymous struct) since it is in an anonymous union [11.5 [class.union] paragraph 2: "The member-specification of an anonymous union shall only define non-static data members."].
I would suggest that this compiler is correctly interpreting the standard but that this is a defect in the standard. There is no reason to disallow definition of anonymous structs.
Furthermore, is it really necessary to disallow definition of named types in anonymous unions in general, as long as the types do not need fully qualified names for external linkage? Why should this be illegal?
struct E { union { typedef int Int; struct X { X *next; Int n; } list; } v; };
Notes from October 2002 meeting:
There was agreement that the standard says such declarations are invalid; therefore this must be considered as an extension. There was general feeling that this extension would not be too useful, though Jason Merrill was sympathetic to the argument. It was also agreed that if this were to be changed it would require careful wording so as not to allow too many cases.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
Can a member of a union be of a class that has a user-declared non-default constructor? The restrictions on union membership in 11.5 [class.union] paragraph 1 only mention default and copy constructors:
An object of a class with a non-trivial default constructor (11.4.5 [class.ctor]), a non-trivial copy constructor (11.4.5.3 [class.copy.ctor]), a non-trivial destructor (11.4.7 [class.dtor]), or a non-trivial copy assignment operator (12.4.3.2 [over.ass], 11.4.5.3 [class.copy.ctor]) cannot be a member of a union...
(11.4.5 [class.ctor] paragraph 11 does say, “a non-trivial constructor,” but it's not clear whether that was intended to refer only to default and copy constructors or to any user-declared constructor. For example, 6.7.7 [class.temporary] paragraph 3 also speaks of a “non-trivial constructor,” but the cross-references there make it clear that only default and copy constructors are in view.)
Note (March, 2008):
This issue was resolved by the adoption of paper J16/08-0054 = WG21 N2544 (“Unrestricted Unions”) at the Bellevue meeting.
Rationale (August, 2011):
As given in the preceding note.
According to 11.6 [class.local] paragraph 1,
Declarations in a local class shall not odr-use (6.3 [basic.def.odr]) a variable with automatic storage duration from an enclosing scope.
This restriction should apply as well to the this pointer when the class is local to a non-static member function.
Rationale (January, 2014):
The restrictions in _N4567_.5.1.1 [expr.prim.general] limiting the locations in which this may appear already prevent uses of the containing member function's this where the local class's this does not hide it.
In an example like
struct W {};
struct X : W {};
struct Y : W {};
struct Z : X, Y {};          // Z has two W subobjects

struct A {
  virtual W *f();
};
struct B : A {
  virtual X *f();
};
struct C : B {
  virtual Z *f();            // C::f overrides A::f and B::f
};
it is not clear whether the return type of C::f() satisfies the requirement of 11.7.3 [class.virtual] bullet 7.2 that the return type in the base class of the function be an unambiguous base of the return type in the derived class. Should the conversion from Z* to X* in overriding B::f() be considered to disambiguate the conversion from Z* to W* in overriding A::f()? There is implementation divergence on this question.
Rationale (May, 2015):
CWG determined that the current wording of the Standard is correct: C::f() overrides both B::f() and A::f(), and the latter overriding is ill-formed because of the ambiguity.
According to 11.7.4 [class.abstract] paragraph 6,
Member functions can be called from a constructor (or destructor) of an abstract class; the effect of making a virtual call (11.7.3 [class.virtual]) to a pure virtual function directly or indirectly for the object being created (or destroyed) from such a constructor (or destructor) is undefined.
This prohibition is unnecessarily restrictive. It should not apply to cases in which the pure virtual function has been defined.
Currently the "pure" specifier for a virtual member function has two meanings that need not be related:

the function need not be defined, and

every concrete derived class must override the function.

The prohibition of virtual calls to pure virtual functions arises from the first meaning and unnecessarily penalizes those who only need the second.
For example, consider a scenario such as the following. A class B is defined containing a (non-pure) virtual function f that provides some initialization and is thus called from the base class constructor. As time passes, a number of classes are derived from B and it is noticed that each needs to override f, so it is decided to make B::f pure to enforce this convention while still leaving the original definition of B::f to perform its needed initialization. However, the act of making B::f pure means that every reference to f that might occur during the execution of one of B's constructors must be tracked down and edited to be a qualified reference to B::f. This process is tedious and error-prone: needed edits might be overlooked, and calls that actually should be virtual when the containing function is called other than during construction/destruction might be incorrectly changed.
Suggested resolution: Allow virtual calls to pure virtual functions if the function has been defined.
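The scenario can be sketched as follows (the names B, D, and f are invented); the qualified call is the workaround the submitter finds tedious to apply everywhere:

#include <cstdio>

struct B {
  B() { B::f(); }      // qualified (non-virtual) call: well-defined, since B::f is defined
  // B() { f(); }      // a virtual call to the pure virtual f here would be undefined,
  //                   // even though B::f has a definition below
  virtual void f() = 0;
};
void B::f() { std::puts("B::f"); }

struct D : B {
  void f() override { std::puts("D::f"); }
};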
Rationale (February, 2012):
In light of the nontrivial implementation issues such a change would raise, as well as the fact that this restriction has been accepted into the C++ design lexicon for many years, CWG decided not to make a change at this point. Further consideration, if any, should occur within EWG.
Rationale (February, 2014):
EWG determined that no action should be taken on this issue.
An abstract class is permitted to be a final class. Such classes are very nearly useless and should probably be made ill-formed.
Rationale (February, 2012):
Such classes could be used for static members and for access control. CWG saw no need to prohibit them.
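For illustration (the name Constants is invented), the kind of class the rationale has in mind, which is final yet abstract and serves only as a holder of static members:

struct Constants final {
  static constexpr double pi = 3.141592653589793;
  static int next_id();
  virtual void not_instantiable() = 0;   // abstract: no objects can ever be created
};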
class Foo {
public:
  Foo() {}
  ~Foo() {}
};

class A : virtual private Foo {
public:
  A() {}
  ~A() {}
};

class Bar : public A {
public:
  Bar() {}
  ~Bar() {}
};

~Bar() calls ~Foo(), which is ill-formed due to access violation, right? (Bar's constructor has the same problem since it needs to call Foo's constructor.) There seems to be some disagreement among compilers: Sun, IBM and g++ reject the testcase; EDG and HP accept it. Perhaps this case should be clarified by a note in the draft.
In short, it looks like a class with a virtual private base can't be derived from.
Rationale: This is what was intended.
Footnote 98 says:
As specified previously in 11.8 [class.access], private members of a base class remain inaccessible even to derived classes unless friend declarations within the base class declaration are used to grant access explicitly.

This footnote does not fit with the algorithm provided in 11.8.3 [class.access.base] paragraph 4 because it does not take into account the naming class concept introduced in that paragraph.
(See also paper J16/99-0002 = WG21 N1179.)
Rationale (10/99): The footnote should be read as referring to immediately-derived classes, and is accurate in that context.
The Standard does not appear to specify how to handle cases in which conflicting access specifications for a member are inherited from different base classes. For example,
struct A {
public:
  int i;
};
struct B : virtual public A {
protected:
  using A::i;
};
struct C : virtual public A, public B {
  // "i" is protected from B, public from A
};
This question affects both the existing wording of 11.8.3 [class.access.base] paragraph 4 (“m as a member of N is public ... m as a member of N is private ... m as a member of N is protected”) and the proposed wording for issue 385 (“when a nonstatic data member or nonstatic member function is a protected member of its naming class”).
One possible definition of “is public” would be something like, “if any visible declaration of the entity has public access.” One could also plausibly define the access of m in N to be the minimum of all the visible declarations, or even an error if the visible declarations are inconsistent.
11.8.3 [class.access.base] paragraph 1 describes the access of inherited members, so a clarifying statement resolving this issue might plausibly be inserted at the end of that paragraph.
Proposed resolution (October, 2004):
Add the following text as a new paragraph after 11.8.3 [class.access.base] paragraph 1:
If a given base class can be reached along more than one path through a derived class's sub-object lattice (11.7.2 [class.mi]), a member of that base class could have different accessibility in the derived class along different paths. In such cases, the most permissive access prevails. [Example:
struct B { static int i; };
class I : protected B { };
class D1 : public B, public I { };
class D2 : public I, private B { };

i is accessible as a public member of D1 and as a protected member of D2. —end example]
Rationale (03/2005): This question is already covered, in almost identical words, in 11.8.7 [class.paths].
Consider the following example:
template <typename T> struct S1 { };
struct S2 : private S1<int> { };
struct S3 : S2 {
void f() {
S1<int> s1; // #1
}
};
The reference in #1 to S1 finds the injected-class-name of S1<int>, which is private in S2 and thus inaccessible in S3. However, there is implementation divergence on the treatment of this reference, with many accepting the declaration without error, presumably because of the use of the name in a template-id. Should the Standard give special treatment to this usage?
Rationale (November, 2014):
The specification is as intended, and the example is ill-formed.
11.8.4 [class.friend], paragraph 7, says
A name nominated by a friend declaration shall be accessible in the scope of the class containing the friend declaration.
Does that mean the following should be illegal?
class A { void f(); }; class B { friend void A::f(); }; // Error: A::f not accessible from B
I discussed this with Bjarne in email, and he thinks it was an editorial error and this was not the committee's intention. The paragraph seems to have been added in the pre-Kona (24 Sept 1996) mailing, and I could not find anything in the previous meeting's (Stockholm) mailings which led me to believe this was intentional. The only compiler vendor which I think currently implements it is the latest release (2.43) of the EDG front end.
Proposed resolution (10/00):
Remove the first sentence of 11.8.4 [class.friend], paragraph 7.
Rationale (04/01):
After the 10/00 vote to accept this issue as a DR with the proposed resolution, it was noted that the first two sentences of 11.8 [class.access] paragraph 3 cause the proposed change to have no effect:
Access control is applied uniformly to all names, whether the names are referred to from declarations or expressions. [Note: access control applies to names nominated by friend declarations (11.8.4 [class.friend]) and using-declarations (9.9 [namespace.udecl]). ]
In addition to the obvious editing to the text of the note, an exception to the blanket statement in the first sentence would also be required. However, discussion during the 04/01 meeting failed to produce consensus on exactly which names in the friend declaration should be exempted from the requirements of access control.
One possibility would be that only the name nominated as friend should be exempt. However, that approach would make it impossible to name a function as a friend if it used private types in its parameters or return type. Another suggestion was to ignore access for every name used in a friend declaration. That approach raised a question about access within the body of a friend function defined inline in the class body — the body is part of the declaration of a function, but references within the body of a friend function should still be subject to the usual access controls.
Other possibilities were raised, such as allowing the declaration of a friend member function if the declaration were permissible in its containing class, or taking the union of the access within the befriending class and the befriended entity. However, these suggestions would have been complex and difficult to specify correctly.
Ultimately it was decided that the original perceived defect was not sufficiently serious as to warrant the degree of complexity required to resolve it satisfactorily and the issue was consequently declared not to be a defect. It was observed that most of the problems involved with the current state of affairs result from inability to declare a particular member function as a friend; in such cases, an easy workaround is simply to befriend the entire class rather than the specific member function.
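A sketch of that workaround (the names X, Y, and f are invented):

class Y;

class X {
  int secret = 0;
  // friend void Y::f(X&);   // naming just the member function can be impossible here:
  //                         // Y must be complete, and the access questions above arise
  friend class Y;            // befriending the whole class sidesteps the problem
};

class Y {
public:
  void f(X& x) { x.secret = 42; }   // OK through the class-level friendship
};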
Thus says the section 11.8.4 [class.friend]/7 in ISO 14882 C++ standard:
A name nominated by a friend declaration shall be accessible in the scope of the class containing the friend declaration.
The obvious intention of this is to allow a friend declaration to specify a function (or nested class, enum, etc.) that is declared "private" or "protected" in its enclosing class. However, literal interpretation seems to allow a broader access to the befriended function by the whole class that is declaring the friendship.
If the rule were interpreted literally as it is currently written, this would compile (when it, of course, shouldn't be allowed at all):
class C {
private:
  static void f();
};

class D {
  friend void C::f();   // A name nominated by friend declaration...
  D() {
    C::f();             // ... shall be accessible in scope of class declaring friendship
  }
};
Suggested fix is to reword "in the scope of the class containing the friend declaration" to exclude all other references from the scope of the declaring class, except the friend-declaration itself.
Notes from the March 2004 meeting:
We considered this and concluded that the standard is clear enough.
I just received a query from a user of why line #1 in the following snippet is ill-formed:
void g(int (*)(int));

template<class T> class A {
  friend int f(int) { return 0; }
  void h() {
    g(f);    // #1
  }
};
I believe that the special invisibility rule about friends is too complicated and makes life too complicated, especially considering that friends in templates are not templates, nor can they be conveniently rewritten with a “first declare at the namespace scope” rule. I can understand the rules when they make programming easier or prevent some obvious “silly” mistakes; but that does not seem to be the case here.
John Spicer: See two papers that discuss this issue: N0878 by Bill Gibbons, which ultimately gave rise to our current rules, and N0913 by me as an alternative to N0878.
Rationale (April, 2005):
The Standard is clear and consistent; this rule is the result of an explicit decision by the Committee.
After the adoption of the wording for extended friend declarations, we now have this new paragraph in 11.8.4 [class.friend]:
A friend declaration that does not declare a function shall have one of the following forms:
friend elaborated-type-specifier ;
friend simple-type-specifier ;
friend typename-specifier ;
But what about friend class templates? Should the following examples compile in C++0x?
template< template <class> class T > struct A{ friend T; }; template< class > struct C; struct B{ friend C; };
Proposed resolution (June, 2008):
Change 11.8.4 [class.friend] paragraph 3 as follows:
A friend declaration that does not declare a function shall have one of the following forms:
friend elaborated-type-specifier ;
friend simple-type-specifier ;
friend typename-specifier ;
friend ::opt nested-name-specifieropt template-name ;
friend identifier ;
In the last alternative, the identifier shall name a template template-parameter. [Note: a friend declaration may be the declaration in a template-declaration (Clause 13 [temp], 13.7.5 [temp.friend]). —end note] If the type specifier in a friend declaration designates a (possibly cv-qualified) class type or a class template, that class or template is declared as a friend; otherwise, the friend declaration is ignored. [Example:...
Rationale (September, 2008):
The proposed extension is not needed. The template case can be handled simply by providing a template header:
template <typename T> friend class X<T>;
It appears that naming an implicitly-declared member function in a friend declaration requires the full set of decorations to be specified. For example,
struct A { }; struct B { friend constexpr A::A() noexcept; };
There is implementation variation regarding the enforcement of this requirement, however. Should the Standard provide default treatment for such cases, allowing the simpler
friend A::A();
?
Additional note, April, 2015:
EWG has decided not to make a change in this area.
According to 11.8.4 [class.friend] paragraph 3,
A friend declaration that does not declare a function shall have one of the following forms:
friend elaborated-type-specifier ;
friend simple-type-specifier ;
friend typename-specifier ;
However, many implementations accept
friend enum E;
even though that form is explicitly not allowed by 9.2.9.4 [dcl.type.elab] paragraph 1 (which only permits class-key and not enum-key in friend declarations). Some implementations also accept opaque enumeration declarations like
friend enum E : int;
The latter form could plausibly be used in an example like:
class C {
constexpr static int priv = 15;
friend enum class my_constants;
};
enum class my_constants {
pub = C::priv // OK because of friend decl
};
(See also issue 2131.)
Notes from the October, 2018 teleconference:
The suggested plausible use for the feature would require additional wording, because the effect of friendship is currently only described for classes and functions, not for enumerations. There does not appear to be a demand for the change.
11.8.5 [class.protected] paragraph 1 says:
When a friend or a member function of a derived class references a protected nonstatic member of a base class, an access check applies in addition to ...

Instead of saying "references a protected nonstatic member of a base class", shouldn't this be rewritten to use the concept of naming class as 11.8.3 [class.access.base] paragraph 4 does?
Rationale (04/99): This rule is orthogonal to the specification in 11.8.3 [class.access.base] paragraph 4.
The restrictions on protected access in 11.8.5 [class.protected] apply only to forming pointers to members and to member access expressions. It should be considered whether to extend these restrictions to pointer-to-member expressions as well. For example,
struct base {
protected:
  int x;
};

struct derived : base {
  void foo(base* b) {
    b->x = 123;                  // not ok
    (b->*(&derived::x)) = 123;   // ok?!
  }
};
Rationale (August, 2010):
Access applies to use of names, so the check must be done at the point at which the pointer-to-member is formed. It is not possible to tell from the pointer to member at runtime what the access was.
Is the following code well-formed?
struct A { /* */ }; int main() { A a=a; }
Note, that { int a=a; } is pretty legal.
And if so, what is the semantics of the self-initialization of UDT? For example
#include <stdio.h>

struct A {
  A() { printf("A::A() %p\n", this); }
  A(const A& a) { printf("A::A(const A&) %p %p\n", this, &a); }
  ~A() { printf("A::~A() %p\n", this); }
};

int main() {
  A a=a;
}
can be compiled and prints:
A::A(const A&) 0253FDD8 0253FDD8 A::~A() 0253FDD8
(on some implementations).
Notes from October 2002 meeting:
6.7.3 [basic.life] paragraph 6 indicates that the references here are valid. It's permitted to take the address of a class object before it is fully initialized, and it's permitted to pass it as an argument to a reference parameter as long as the reference can bind directly. Except for the failure to cast the pointers to void * for the %p in the printfs, these examples are standard-conforming.
There appears to be no prohibition of assignments in member initializer expressions (neither mem-initializers nor brace-or-equal-initializers):
struct A { int x; int y = x = 37; };
This seems surprising. Should it be allowed?
Rationale (April, 2013):
CWG saw no problems with the example. It did note, however, that the assignment to x is not an initialization, so x would not be considered to have been initialized by this example.
According to 11.9.3 [class.base.init] paragraph 7,
A mem-initializer where the mem-initializer-id denotes a virtual base class is ignored during execution of a constructor of any class that is not the most derived class.
Presumably “ignored” here means that there will be no runtime effect but that semantic restrictions such as access checking and the ODR must still be applied, but this is not completely clear.
Rationale (October, 2015):
The fact that “ignored” applies only to runtime effects is indicated by the phrase “during execution” in the existing wording. This seems clear enough.
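A brief sketch of the behavior being described (the names are invented):

struct V { V(int) { } };
struct B : virtual V {
  B() : V(1) { }   // ignored at run time when B is not the most derived class,
                   // but still subject to access checking and other semantic rules
};
struct D : B {
  D() : V(2) { }   // for a D object, this is the initialization of V that runs
};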
11.9.3 [class.base.init] paragraph 3 singles out base classes when indicating the allowance of typedefs, etc. for the naming of types in a mem-initializer-list. It appears that the omission of the class of the constructor is unintentional.
Rationale (November, 2016):
There was no actual issue; the question was based on a misunderstanding of the current specification.
The second paragraph of section 11.9.5 [class.cdtor] contains the following text:
To explicitly or implicitly convert a pointer (an lvalue) referring to an object of class X to a pointer (reference) to a direct or indirect base class B of X, the construction of X and the construction of all of its direct or indirect bases that directly or indirectly derive from B shall have started and the destruction of these classes shall not have completed, otherwise the conversion results in undefined behavior.
Now suppose we have the following piece of code:
struct a {
  a() : m_a_data(0) { }
  a(const a& rhsa) : m_a_data(rhsa.m_a_data) { }
  int m_a_data;
};

struct b : virtual a {
  b() : m_b_data(0) { }
  b(const b& rhsb) : a(rhsb), m_b_data(rhsb.m_b_data) { }
  int m_b_data;
};

struct c : b {
  c() : m_c_data(0) { }
  c(const c& rhsc)
    : a(rhsc),    // Undefined behaviour when constructing an object of type 'c'
      b(rhsc),
      m_c_data(rhsc.m_c_data) { }
  int m_c_data;
};

int main() {
  c ac1, ac2(ac1);
}
The problem with the above snippet is that when the value 'ac2' is being created and its construction gets started, c's copy constructor has first to initialize the virtual base class subobject 'a', which requires that the lvalue expression 'rhsc' be converted to the type of the parameter of a's copy constructor, which is 'const a&'. According to the wording quoted above, this can be done without undefined behaviour if and only if b's construction has already started, which is not possible since 'a', being a virtual base class, has to be initialized first by a constructor of the most derived object (11.9.3 [class.base.init]).
The issue could in some cases be alleviated when 'c' has a user-defined copy constructor. The constructor could default-initialize its 'a' subobject and then initialize a's members as needed, taking advantage of the latitude given in paragraph 2 of 11.9.3 [class.base.init].

But if 'c' ends up having the implicitly-defined copy constructor, there's no way to evade undefined behaviour.
struct c : b { c() : m_c_data(0) { } int m_c_data; }; int main() { c ac1, ac2(ac1); }
Paragraph 8 of 11.4.5.3 [class.copy.ctor] states
The implicitly-defined copy constructor for class X performs a memberwise copy of its subobjects. The order of copying is the same as the order of initialization of bases and members in a user-defined constructor (see 11.9.3 [class.base.init]). Each subobject is copied in the manner appropriate to its type:
- if the subobject is of class type, the copy constructor for the class is used;
This effectively means that the implicitly-defined copy constructor for 'c' will have to initialize its 'a' base class subobject first, and that must be done with a's copy constructor, which will always require a conversion of an lvalue expression of type 'const c' to an lvalue of type 'const a&'. The situation would be the same if all three classes shown had implicitly-defined copy constructors.
Suggested resolution:
Prepend to paragraph 2 of 11.9.5 [class.cdtor] the following:
Unless the conversion happens in a mem-initializer whose mem-initializer-id designates a virtual base class of X, to explicitly or implicitly convert ...
Notes from the 10/01 meeting:
There is no problem in this example. ac1 is fully initialized before it is used in the initialization of ac2.
Currently, 11.4.5.3 [class.copy.ctor] paragraphs 31-32 apply only to the name of a local variable in determining whether a return expression is a candidate for copy elision or move construction. Would it make sense to extend that to include the right operand of a comma operator?
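For illustration (the names X, log, and f are invented), under the current rules the operand of the comma operator is not treated as the name of a local variable:

struct X { X(); X(const X&); X(X&&); };
void log();

X f() {
  X x;
  return (log(), x);   // the result is the lvalue x, but x is copied here; it is not
                       // a candidate for copy elision or implicit move, which is
                       // what the question above asks about
}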
EWG 2022-11-11
This is a request for a new feature, which should be proposed in a paper to EWG.
The following example does not work as one might expect:
namespace N {
  class C {};
}

int operator+(int i, N::C) { return i+1; }

#include <numeric>

int main() {
  N::C a[10];
  std::accumulate(a, a+10, 0);
}

According to 6.5.3 [basic.lookup.unqual] paragraph 6, I would expect that the "+" call inside std::accumulate would find the global operator+. Is this true, or am I missing a rule? Clearly, the operator+ would be found by Koenig lookup if it were in namespace N.
Daveed Vandevoorde: But doesn't unqualified lookup of the operator+ in the definition of std::accumulate proceed in the namespace where the implicit specialization is generated; i.e., in namespace std?
In that case, you may find a non-empty overload set for operator+ in namespace std and the surrounding (global) namespace is no longer considered?
Nathan Myers: Indeed, <string> defines operator+, as do <complex>, <valarray>, and <iterator>. Any of these might hide the global operator.
Herb Sutter: These examples fail for the same reason:
struct Value { int i; };
typedef map<int, Value> CMap;
typedef CMap::value_type CPair;

ostream& operator<<(ostream& os, const CPair& cp) {
  return os << cp.first << "/" << cp.second.i;
}

int main() {
  CMap courseMap;
  copy(courseMap.begin(), courseMap.end(), ostream_iterator<CPair>(cout, "\n"));
}

template<class T, class S>
ostream& operator<<(ostream& out, pair<T,S> pr) {
  return out << pr.first << " : " << pr.second << endl;
}

int main() {
  map<int, string> pl;
  copy(pl.begin(), pl.end(), ostream_iterator<places_t::value_type>(cout, "\n"));
}

This technique (copying from a map to another container or stream) should work. If it really cannot be made to work, that would seem broken to me. The reason it does not work is that copy and pair are in namespace std and the name lookup rules do not permit the global operator<< to be found, because the other operator<<'s in namespace std hide the global operator. (Aside: FWIW, I think most programmers don't realize that a typedef like CPair is actually in namespace std, and not the global namespace.)
Bill Gibbons: It looks like part of this problem is that the library is referring to names which it requires the client to declare in the global namespace (the operator names) while also declaring those names in namespace std. This would be considered very poor design for plain function names; but the operator names are special.
There is a related case in the lookup of operator conversion functions. The declaration of a conversion function in a derived class does not hide any conversion functions in a base class unless they convert to the same type. Should the same thing be done for the lookup of operator function names, e.g. should an operator name in the global namespace be visible in namespace std unless there is a matching declaration in std?
Because the operator function names are fixed, it is much more likely that a declaration in an inner namespace will accidentally hide a declaration in an outer namespace, and the two declarations are much less likely to interfere with each other if they are both visible.
The lookup rules for operator names (when used implicitly) are already quite different from those for ordinary function names. It might be worthwhile to add one more special case.
Mike Ball : The original SGI proposal said that non-transitive points of instantiation were also considered. Why, when, and by whom was it added?
Rationale (10/99): This appears to be mainly a program design issue. Furthermore, any attempt to address it in the core language would be beyond the scope of what can be done in a Technical Corrigendum.
Is the following well-formed?
template <typename T>
class test {
public:
  operator T& () { return m_i; }
private:
  T m_i;
};

int main() {
  test<int*> t2;
  t2 += 1;     // Allowed?
}
Is it possible that by "assignment operators" (12.2.2.3 [over.match.oper] paragraph 4) only the built-in candidates for operator= (i.e. excluding +=, *=, etc.) were meant? On one hand the plural ("operators") seems to imply that all the assignment operators are considered. OTOH, there has already been a core DR (221) about a missing distinction between "assignment operator" and "compound assignment operators". Is there a similar defect here?
Steve Adamczyk: The standard is ambiguous. However, I think the ARM was fairly clear about "assignment operators" meaning only "=", and I find that Cfront 3.0.1 accepts the test case (with typename changed to class). I don't know whether that's good or bad, but it's at least a precedent. Given the change of Core Issue 221, if we do nothing further, conversions are valid on += and therefore this case is valid.
Note that "t2++;" is unquestionably valid, so one could also argue for the status quo (post-221) on the basis of consistency.
Notes from the October 2003 meeting:
We believe the example is well-formed, and no change other than that in issue 221 is needed.
The rules for selecting candidate functions in copy-list-initialization (12.2.2.8 [over.match.list]) differ from those of regular copy-initialization (12.2.2.5 [over.match.copy]): the latter specify that only the converting (non-explicit) constructors are considered, while the former include all constructors but state that the program is ill-formed if an explicit constructor is selected by overload resolution. This is counterintuitive and can lead to surprising results. For example, the call to the function object p in the following example is ambiguous because the explicit constructor is a candidate for the initialization of the operator's parameter:
struct MyStore { explicit MyStore(int initialCapacity); };
struct MyInt { MyInt(int i); };

struct Printer {
  void operator()(MyStore const& s);
  void operator()(MyInt const& i);
};

void f() {
  Printer p;
  p({23});
}
Rationale (March, 2011):
The current rules are as intended.
According to 12.2.3 [over.match.viable] paragraph 4,
Third, for F to be a viable function, there shall exist for each argument an implicit conversion sequence (12.2.4.2 [over.best.ics]) that converts that argument to the corresponding parameter of F. If the parameter has reference type, the implicit conversion sequence includes the operation of binding the reference, and the fact that an lvalue reference to non-const cannot be bound to an rvalue and that an rvalue reference cannot be bound to an lvalue can affect the viability of the function (see 12.2.4.2.5 [over.ics.ref]).
The description of an implicit conversion sequence in 12.2.4.2 [over.best.ics] paragraph 6 only discusses the relationship of the types. For example, for a class type, it says,
When the parameter has a class type and the argument expression has the same type, the implicit conversion sequence is an identity conversion.
This ignores whether the conversion can actually be performed, considering explicit qualification of constructors and conversion functions. There is implementation divergence in the handling of an example like:
template<typename T> void f(T);
template<typename T> void f(const T &);

struct Woof {
  explicit Woof() = default;
  explicit Woof(const Woof&) = default;
  explicit Woof(Woof&&) = default;
  Woof& operator=(const Woof&) = default;
  Woof& operator=(Woof&&) = default;
};

int main() {
  const Woof cw{};
  f(cw);
}
If f(Woof) is viable, the call is ambiguous, even though calling f(Woof) would be ill-formed because of the explicit copy constructor.
This seems to be consistent with the general approach described in 12.2.4.2 [over.best.ics] paragraph 2, even though explicitness is not explicitly mentioned:
Implicit conversion sequences are concerned only with the type, cv-qualification, and value category of the argument and how these are converted to match the corresponding properties of the parameter. Other properties, such as the lifetime, storage class, alignment, accessibility of the argument, whether the argument is a bit-field, and whether a function is deleted (9.5.3 [dcl.fct.def.delete]), are ignored. So, although an implicit conversion sequence can be defined for a given argument-parameter pair, the conversion from the argument to the parameter might still be ill-formed in the final analysis.
Rationale (November, 2018):
The intent is that the example should be ambiguous. As an editorial matter, the “such as” and “so” remarks should be turned into notes.
It's not clear how overloading and partial ordering handle non-deduced pairs of corresponding arguments. For example:
template<typename T> struct A { typedef char* type; };

template<typename T> char* f1(T, typename A<T>::type);     // #1
template<typename T> long* f1(T*, typename A<T>::type*);   // #2

long* p1 = f1(p1, 0);                                       // #3
I thought that #3 is ambiguous but different compilers disagree on that. Comeau C/C++ 4.3.3 (EDG 3.0.3) accepted the code, GCC 3.2 and BCC 5.5 selected #1 while VC7.1+ yields ambiguity.
I intuitively thought that the second pair should prevent overloading from triggering partial ordering, since both arguments are non-deduced and have different types, (char*, char**), just like in the following:
template<typename T> char* f2(T, char*);     // #3
template<typename T> long* f2(T*, char**);   // #4

long* p2 = f2(p2, 0);                        // #5
In this case all the compilers I checked found #5 to be ambiguous. The standard and DR 214 is not clear about how partial ordering handle such cases.
I think that overloading should not trigger partial ordering (in step 12.2.4 [over.match.best]/1/5) if some candidates have non-deduced pairs with different (specialized) types. At this stage the arguments have already been adjusted (e.g., array to pointer), so there is no need to mention that. If one of the arguments is non-deduced, then partial ordering should only consider the type from the specialization:
template<typename T> struct B { typedef T type; };

template<typename T> char* f3(T, T);                     // #7
template<typename T> long* f3(T, typename B<T>::type);   // #8

char* p3 = f3(p3, p3);                                   // #9
According to my reasoning #9 should yield ambiguity since second pair is (T, long*). The second type (i.e. long*) was taken from the specialization candidate of #8. EDG and GCC accepted the code. VC and BCC found an ambiguity.
John Spicer: There may (or may not) be an issue concerning whether nondeduced contexts are handled properly in the partial ordering rules. In general, I think nondeduced contexts work, but we should walk through some examples to make sure we think they work properly.
Rani's description of the problem suggests that he believes that partial ordering is done on the specialized types. This is not correct. Partial ordering is done on the templates themselves, independent of type information from the specialization.
Notes from October 2004 meeting:
John Spicer will investigate further to see if any action is required.
(See also issue 885.)
CWG 2022-11-11
The second function parameter contains template parameters not deducible in its context, thus that parameter does not contribute to partial ordering. There is no implementation divergence. Close as NAD.
It is not clear whether the current treatment of an example like the following is what we want:
template<typename T> void foo(T, int); template<typename T> void foo(T&, ...); struct Q; void fn1(Q &data_vectors) { foo(data_vectors, 0); }
According to 12.2.4.2 [over.best.ics] paragraph 8,
If no conversions are required to match an argument to a parameter type, the implicit conversion sequence is the standard conversion sequence consisting of the identity conversion (12.2.4.2.2 [over.ics.scs]).
This would select the first overload and then fail in attempting to call it because of the incomplete type. On the other hand, it is ill-formed to define or call a function with an incomplete parameter type, although it can be declared, so it might be reasonable to take completeness of the parameter type into consideration for SFINAE purposes. 13.10.3 [temp.deduct] bullet 8.11 says,
[Note: Type deduction may fail for the following reasons:
...
Attempting to create a function type in which a parameter type or the return type is an abstract class type (11.7.4 [class.abstract]).
If a definition of Q were available, we would need to instantiate it to see if it is abstract.
It would seem reasonable for an incomplete type to be invalid as well. That would be consistent with the other rules and general desire not to select functions you can't call based on the template argument types.
Rationale (July, 2017):
CWG determined that no change was needed.
There is a moderately serious problem with the definition of overload resolution. Consider this example:
struct B; struct A { A(B); }; struct B { operator A(); } b; int main() { (void)A(b); }
This is pretty much the definition of "ambiguous," right? You want to convert a B to an A, and there are two equally good ways of doing that: a constructor of A that takes a B, and a conversion function of B that returns an A.
What we discover when we trace this through the standard, unfortunately, is that the constructor is favored over the conversion function. The definition of direct-initialization (the parenthesized form) of a class considers only constructors of that class. In this case, the constructors are the A(B) constructor and the (implicitly-generated) A(const A&) copy constructor. Here's how they are ranked on the argument match:
A(B) | exact match (need a B, have a B) |
A(const A&) | user-defined conversion (B::operator A used to convert B to A) |
In other words, the conversion function does get considered, but it's operating with, in effect, a handicap of one user defined conversion. To put that a different way, this problem is a problem of weighting, not a problem that certain conversion paths are not considered.
I believe the reason that the standard's approach doesn't yield the "intuitive" result is that programmers expect copy constructor elision to be done whenever reasonable, so the intuitive cost of using the conversion function in the example above is simply the cost of the conversion function, not the cost of the conversion function plus the cost of the copy constructor (which is what the standard counts).
Suggested resolution:
In a direct-initialization overload resolution case, if the candidate function being called is a copy constructor and its argument (after any implicit conversions) is a temporary that is the return value of a conversion function, and the temporary can be optimized away, the cost of the argument match for the copy constructor should be considered to be the cost of the argument match on the conversion function argument.
Notes from 10/01 meeting:
It turns out that there is existing practice both ways on this issue, so it's not clear that it is "broken". There is some reason to feel that something that looks like a "constructor call" should call a constructor if possible, rather than a conversion function. The CWG decided to leave it alone.
The structure of 12.2.3 [over.match.viable] paragraph 3 is of the form
X is better than Y if
condition 1, or, if not that,
condition 2, or, if not that,
...
It would be better to de-bullet this description, define the conditions, and then say, “X is better than Y if condition 1, condition 2, ...” This would also avoid the awkward “or, if not that,” phrasing.
Rationale (November, 2014):
CWG expressed a preference for the existing structure of this paragraph over the suggested rewrite. The change from “or, if not that,” to “otherwise” can be handled editorially, if desired.
Consider the following example:
struct S {
operator int() const & { return 0; }
operator char() && { return 0; }
};
void foo(int) {}
void foo(char) {}
int main() {
foo(S{}); //OK, calls foo(char)
}
Here, the ICS for each function is a user-defined conversion involving the same user-defined conversion function, operator char() &&, because of 12.2.4.3 [over.ics.rank] bullet 3.2.3:
S1 and S2 include reference bindings (9.4.4 [dcl.init.ref]) and neither refers to an implicit object parameter of a non-static member function declared without a ref-qualifier, and S1 binds an rvalue reference to an rvalue and S2 binds an lvalue reference
foo(int) is a promotion, while foo(char) is an exact match, so the latter is chosen.
Replacing int and char in this example with non-interconvertible types results in a different outcome:
class A {};
class B {};

struct S {
  operator A() const & { return A{}; }
  operator B() && { return B{}; }
};

void foo(A) {}
void foo(B) {}

int main() {
  foo(S{});    // error: call to foo is ambiguous
}
Here, only one of the two user-defined conversion operators is viable for each overload. Consequently, 12.2.4.3 [over.ics.rank] bullet 3.3,
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or they initialize the same class in an aggregate initialization and in either case the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2.
does not apply, unlike the earlier case, because different user-defined conversion functions appear in each conversion sequence and thus the sequences are indistinguishable.
This seems inconsistent.
Suggested resolution:
Change 12.2.4.3 [over.ics.rank] bullet 3.3 as follows:
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if
the initial standard conversion sequence of U1 is better than the initial standard conversion sequence of U2, or
they contain the same user-defined conversion function or constructor or they initialize the same class in an aggregate initialization and in either case the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2.
CWG 2022-11-11
This is an extension that is best addressed by a paper to EWG.
Can p->f, where f refers to a set of overloaded functions all of which are static member functions, be used as an expression in an address-of-overloaded-function context? A strict reading of this section suggests "no", because "p->f" is not the name of an overloaded function (it's an expression). I'm happy with that, but the core group should decide and should add an example to document the decision, whichever way it goes.
Rationale (10/99): The "strict reading" correctly reflects the intent of the Committee, for the reason given, and no clarification is required.
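A sketch of the situation the issue describes (the names S, f, and g are invented):

struct S {
  static void f();
  static void f(int);
};

void g(S* p) {
  void (*p1)() = &S::f;   // OK: S::f names the overload set, and the target type selects f()
  void (*p2)() = p->f;    // not permitted: p->f is an expression, not the name of an
                          // overloaded function, per the rationale above
}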
It is unclear whether the following code is well-formed or not:
class A { }; struct B : public A { void foo (); void foo (int); }; int main () { void (A::*f)() = (void (A::*)())&B::foo; }
12.3 [over.over] paragraph 1 says,
The function selected is the one whose type matches the target type required in the context. The target can be
- ...
- an explicit type conversion (7.6.1.4 [expr.type.conv], 7.6.1.9 [expr.static.cast], 7.6.3 [expr.cast]).
This would appear to make the program ill-formed, since the type in the cast is different from that of either interpretation of the address-of-member expression (its class is A, while the class of the address-of-member expression is B).
However, 12.3 [over.over] paragraph 3 says,
Nonstatic member functions match targets of type "pointer-to-member-function;" the function type of the pointer to member is used to select the member function from the set of overloaded member functions.
The class of which a function is a member is not part of the "function type" (9.3.4.6 [dcl.fct] paragraph 4). The quoted paragraph is thus either a misuse of the phrase "function type," a contradiction of paragraph 1, or an explanation of what "matching the target type" means in the context of a pointer-to-member target. Under the last of these interpretations, the example is well-formed and B::foo() is selected.
Bill Gibbons: I think this is an accident due to vague wording. I think the intent was
The function selected is the one which will make the effect of the cast be that of the identity conversion.
Mike Miller: The "identity conversion" reading seems to me to be overly restrictive. It would lead to the following:
struct B { void f(); void f(int); };
struct D: B { };
void (D::* p1)() = &D::f;                   // ill-formed
void (D::* p2)() = (void (B::*)()) &D::f;   // okay
I would find the need for an explicit cast here surprising, since the downcast is a standard conversion and since the declaration of p1 certainly has enough information to disambiguate the overload set. (See also issue 203.)
Bill Gibbons: There is an interesting situation with using-declarations. If a base class member function is present in the overload set in a derived class due to a using-declaration, it is treated as if it were a derived class member function for purposes of overload resolution in a call (12.2.2 [over.match.funcs] paragraph 4):
For non-conversion functions introduced by a using-declaration into a derived class, the function is considered to be a member of the derived class for the purpose of defining the type of the implicit object parameter
There is no corresponding rule for casts. Such a rule would be practical, but if the base class member function were selected it would not have the same class as that specified in the cast. Since base-to-derived pointer to member conversions are implicit conversions, it would seem reasonable to perform this conversion implicitly in this case, so that the result of the cast has the right type. The usual ambiguity and access restrictions on the base-to-derived conversion would not apply since they do not apply to calling through the using-declaration either.
On the other hand, if there is no special case for this, we end up with the bizarre case:
struct A { void foo(); };
struct B : A {
  using A::foo;
  void foo(int);
};
int main() {
  // Works because "B::foo" contains A::foo() in its overload set.
  (void (A::*)())&B::foo;
  // Does not work because "B::foo(int)" does not match the cast.
  (void (A::*)(int))&B::foo;
}
I think the standard should be clarified by either:
Adding a note to 12.3 [over.over] saying that using-declarations do not participate in this kind of overload resolution; or
Modifying 12.3 [over.over] saying that using-declarations are treated as members of the derived class for matching purposes, and if selected, the resulting pointer to member is implicitly converted to the derived type with no access or ambiguity errors. (The using-declaration itself has already addressed both of these areas.)
Rationale (10/00): The cited example is well-formed. The function type, ignoring the class specification, is used to select the matching function from the overload set as specified in 12.3 [over.over] paragraph 3. If the target type is supplied by an explicit cast, as here, the conversion is then performed on the selected pointer-to-member value, with the usual restrictions on what can and cannot be done with the converted value (7.6.1.9 [expr.static.cast] paragraph 9, 7.6.1.10 [expr.reinterpret.cast] paragraph 9).
I understand that the lvalue-to-rvalue conversion was removed in London. I generally agree with this, but it means that ?: needs to be fixed:
Given:
bool test;
Integer a, b;
test ? a : b;
What builtin do we use? The candidates are
operator ?:(bool, const Integer &, const Integer &)
operator ?:(bool, Integer, Integer)
which are both perfect matches.
(Not a problem in the C++11 FDIS, but misleading.)
Rationale: The description of the conditional operator in 7.6.16 [expr.cond] handles the lvalue case before the prototype is considered.
Now that the restriction against local classes being used as template arguments has been lifted, they are more useful, yet they are still crippled. For some reason or oversight, the restriction against local classes being templates or having member templates was not lifted. Allowing local classes to have member templates facilitates generic programming (the reason for lifting the other restriction), especially when it comes to the visitor-pattern (see the boost::variant documentation and the following example) as implemented in boost and the boost::MPL library (since functors have to be template classes in mpl, and higher-order functors have to have member templates to be useful). A local class with a member template would allow this desirable solution:
#include <boost/variant.hpp>
#include <algorithm>
#include <string>
#include <vector>

int main() {
  struct times_two_generic : public boost::static_visitor<> {
    template <typename T>
    void operator()(T& operand) const { operand += operand; }
  };
  std::vector<boost::variant<int, std::string>> vec;
  vec.push_back(21);
  vec.push_back("hello ");
  times_two_generic visitor;
  std::for_each(vec.begin(), vec.end(), boost::apply_visitor(visitor));
}
Is there any compelling reason not to allow this code? Is there any compelling reason not to allow local classes to be templates, have friends, or be able to define their static data members at function scope? Wouldn't this symmetry amongst local and non-local classes make the language more appealing and less embarrassing?
Rationale (June, 2021):
EWG resolved to pursue this topic with paper P2044. It is no longer tracked as a core issue. See vote.
It would be nice to allow template alias within a function scope, and possibly a scoped concept map. As these affect name lookup and resolution, rather than defining new callable code, they are not seen to present the same problems that prevented class and function templates in the past.
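For illustration, the requested usage would be something like the following (ill-formed under the current rules; Vec is a hypothetical name):

#include <vector>

void f() {
  template<typename T> using Vec = std::vector<T>;   // not permitted today
  Vec<int> v;
}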
Rationale (July, 2009):
This suggestion needs a paper and discussion in EWG before CWG can consider it.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 95.
There doesn't seem to be a good reason for prohibiting C language linkage for function templates with internal linkage, and that could be useful in implementations where the calling convention of a function is determined by its language linkage.
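A sketch of the kind of declaration the issue would like to permit (currently ill-formed, since a template may not be given C language linkage even when the function has internal linkage):

extern "C" {
  template<typename T> static void f(T) { }   // ill-formed today
}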
Rationale (August, 2011):
The specification is as desired.
Given an example like
template<const int I> struct S { decltype(I) m; };
what is the type of m? 13.2 [temp.param] paragraph 5 is clear on the question:
The top-level cv-qualifiers on the template-parameter are ignored when determining its type.
It's not clear that this is the desired outcome, however, particularly in light of the resolution of issue 1130. (This does not make any difference for the current language, as a non-type template parameter is a prvalue and non-class prvalues are never cv-qualified. It would have an impact, however, if a future revision of the language were to allow literal class types as non-type template parameters, so if a change is needed, it might be a good idea to do it now.)
Rationale (November, 2010):
As noted, the treatment of cv-qualification of the type of non-type template parameters is irrelevant because they are currently always non-class prvalues. If the language is extended to allow literal class types, a change to the handling of cv-qualification would be upwardly compatible, so nothing needs to be done now.
The example in 13.2 [temp.param] paragraph 15 contains the line
template<T... Values> apply { }; // Values is a non-type template parameter pack // and a pack expansion
This should presumably be struct apply or some such.
Rationale (August, 2011)
This is an editorial issue that has been transmitted to the project editor.
Although 13.2 [temp.param] paragraph 9 forbids default arguments for template parameter packs, allowing them would make some program patterns easier to write. Should this restriction be removed?
Rationale (April, 2013):
CWG felt that removing the restriction was an extension best considered by EWG.
Additional note, April, 2015:
See EWG issue 15.
EWG 2022-11-11
This is a request for a possibly desirable feature, which should be proposed in a paper to EWG.
According to 13.2 [temp.param] paragraph 9,
A default template-argument shall not be specified in the template-parameter-lists of the definition of a member of a class template that appears outside of the member's class.
This presumably is intended to apply to the parameters of the containing class template, not to the parameters of a member template, but the wording should be clarified. (Default arguments are permitted for a template member of a non-template class, and there does not appear to be a good rationale for treating members of a class template differently in this regard.)
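One way to picture the two readings (an illustrative sketch only; A, g, and h are hypothetical names):

template<typename T> struct A {
  template<typename U> void g();
  template<typename U> void h();
};

// Clearly covered by the rule: a default argument for the enclosing class
// template's parameter in the out-of-class definition of a member.
template<typename T = int>   // ill-formed
template<typename U>
void A<T>::g() { }

// The less clear case: a default argument for the member template's own
// parameter in the out-of-class definition.
template<typename T>
template<typename U = int>   // intended to be covered by the rule as worded?
void A<T>::h() { }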
Rationale (June, 2014):
CWG felt that the existing wording is clear enough.
Consider the following example:
template<typename T, T V, int n = sizeof(V)> using X = int[n]; template<typename T> void f(X<T, 0>*) {} void g() { f<char>(0); }
Current implementations get confused here because they substitute V=0 into the default argument of X before knowing the type T and end up with f having type void (int (*)[sizeof(0)]), that is, the array bound does not depend on T. It's not clear what should happen here.
Rationale (March, 2016):
There is no problem with the specification, only with the implementations; the default argument for n is dependent because V has a dependent type.
The intended treatment of an example like the following is not clear:
template<class ...Types> struct Tuple_ { // _VARIADIC_TEMPLATE
template<Types ...T> int f() {
return sizeof...(Types);
}
};
int main() {
Tuple_<char,int> a;
int b = a.f();
}
According to 13.2 [temp.param] paragraph 19,
If a template-parameter is a type-parameter with an ellipsis prior to its optional identifier or is a parameter-declaration that declares a pack (9.3.4.6 [dcl.fct]), then the template-parameter is a template parameter pack (13.7.4 [temp.variadic]). A template parameter pack that is a parameter-declaration whose type contains one or more unexpanded packs is a pack expansion. Similarly, a template parameter pack that is a type-parameter with a template-parameter-list containing one or more unexpanded packs is a pack expansion. A template parameter pack that is a pack expansion shall not expand a template parameter pack declared in the same template-parameter-list.
with the following example:
template <class... T> struct value_holder {
  template <T... Values> struct apply { };   // Values is a non-type template parameter pack
                                             // and a pack expansion
};
There is implementation divergence on the treatment of the example, with some rejecting it on the basis that the arguments for Tuple_::f cannot be deduced, while others accept it.
Rationale (December, 2018):
The example is ill-formed because the packs have different sizes: Types has 2, T has 0 (from the call).
The discussion of the use of typename with a qualified-id in a template parameter-declaration in 13.3 [temp.names] paragraph 2 is confusing:
typename followed by an unqualified-id names a template type parameter. typename followed by a qualified-id denotes the type in a non-type parameter-declaration.
This rule would be clearer if the unqualified-id case were described in terms of resolving the ambiguity of declaring a template parameter name versus referring to a type-name from the enclosing scope, and if the qualified-id case referred to the use of the typename keyword with dependent types in 13.8 [temp.res]. An example would also be helpful.
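For instance, an example along the following lines might illustrate the two cases (an illustrative sketch only, not proposed wording; S, B, and C are hypothetical names):

struct S { using X = int; };
using X = double;

// typename followed by an unqualified-id: declares a template type parameter
// named X, even though ::X names a type in the enclosing scope.
template<typename X> struct B { };

// typename followed by a qualified-id: denotes the type S::X, so this
// declares a non-type template parameter N of type int.
template<typename S::X N> struct C { };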
Rationale (April, 2006):
The CWG felt that the wording was already clear enough.
According to 13.7.3 [temp.mem] paragraph 4,
A specialization of a member function template does not override a virtual function from a base class.
Bill Gibbons: I think that's sufficiently surprising behavior that it should be ill-formed instead.
As I recall, the main reason why a member function template cannot be virtual is that you can't easily construct reasonable vtables for an infinite set of functions. That doesn't apply to overrides.
Another problem is that you don't know that a specialization overrides until the specialization exists:
struct A {
  virtual void f(int);
};
struct B : A {
  template<class T> void f(T);   // does this override?
};
But this could be handled by saying:
template<int I> struct X { };
struct A {
  virtual void f(X<5>);
};
struct B : A {
  template<int I, int J> void f(X<I+J>);   // does not override
};
void g(B *b) {
  X<5> x;
  b->f<3,2>(x);   // specialization B::f(X<5>) makes program ill-formed
}
So I think there are reasonable semantics. But is it useful?
If not, I think the creation of a specialization that would have been an override had it been declared in the class should be an error.
Daveed Vandevoorde: There is real code out there that is written with this rule in mind. Changing the standard on them would not be good form, IMO.
Mike Ball: Also, if you allow template functions to be specialized outside of the class you introduce yet another non-obvious ordering constraint.
Please don't make such a change after the fact.
John Spicer: This is the result of an explicit committee decision. The reason for this rule is that it is too easy to unwittingly override a function from a base class, which was probably not what was intended when the template was written. Overriding should be a conscious decision by the class writer, not something done accidentally by a template.
Rationale (10/99): The Standard correctly reflects the intent of the Committee.
Notes from October 2002 meeting:
This was reopened because of a discussion while reviewing possible extensions.
Notes from April 2003 meeting:
This was discussed again, and the consensus was that we did not want to make a change, and in particular we did not want to make it an error and risk breaking existing code.
The list of contexts in which pack expansions can occur, in 13.7.4 [temp.variadic] paragraph 4, does not include a function call, in spite of the comments in the example there that assume that a function call is such a context.
Rationale (August, 2010):
initializer-list, mentioned in 13.7.4 [temp.variadic], is used in the argument list of a function call.
A specialization of a variadic function template can produce the same function signature as a non-variadic one; in particular, a class can end up with multiple default constructors if a pack expansion is empty. It would be helpful if such a specialization could be suppressed so that the non-variadic function were preferred.
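A minimal sketch of the kind of collision being described (S is a hypothetical name):

template<typename... Ts>
struct S {
  S() { }            // default constructor
  S(Ts... ts) { }    // for S<>, this also declares S()
};

S<int> ok;           // fine: S(int) and S() are distinct
S<> collision;       // in practice rejected: S<>::S() is declared twice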
Rationale (October, 2012):
It can be argued that this is not a defect in the language but simply something that must be considered by the programmer: if the default constructor and the empty-pack-expansion constructor do the same thing, the default constructor is superfluous, while if they do different things there may be a logic error in one or the other. EWG should resolve the policy question of whether this situation should receive special treatment in the language to make it well-formed.
Rationale (February, 2014):
EWG determined that no action should be taken on this issue. There is an existing workaround for the problem, and it will also be addressed by the Concepts Lite proposal.
Issue 1
Paragraph 1 says that a friend of a class template can be a template. Paragraph 2 says: A friend template may be declared within a non-template class. A friend function template may be defined within a non-template class.
I'm not sure what this wording implies about friend template definitions within template classes. The rules for class templates and normal classes should be the same: a function template can be declared or defined, but a class template can only be declared in a friend declaration.
Issue 2
Paragraph 4 says: When a function is defined in a friend function declaration in a class template, the function is defined when the class template is first instantiated. I take it that this was intended to mean that a function that is defined in a class template is not defined until the first instantiation. I think this should say that a function that is defined in a class template is defined each time the class is instantiated. This means that a function that is defined in a class template must depend on all of the template parameters of the class template, otherwise multiple definition errors could occur during instantiations. If we don't have a rule like this, compilers would have to compare the definitions of functions to see whether they are the same or not. For example:
template <class T> struct A { friend int f() { return sizeof(T); } };
A<int> ai;
A<long> ac;
I hope we would all agree that this program is ill-formed, even if long and int have the same size.
From Bill Gibbons:
[1] That sounds right.
[2] Whenever possible, I try to treat instantiated class templates as if they were ordinary classes with funny names. If you write:
struct A_int { friend int f() { return sizeof(int); } };
struct A_long { friend int f() { return sizeof(long); } };
it is a redefinition (which is not allowed) and an ODR violation. And if you write:
template <class T, class U> struct A { friend int f() { return sizeof(U); } };
A<int,float> ai;
A<long,float> ac;
the corresponding non-template code would be:
struct A_int_float { friend int f() { return sizeof(float); } };
struct A_long_float { friend int f() { return sizeof(float); } };
then the two definitions of "f" are identical so there is no ODR violation, but it is still a redefinition. I think this is just an editorial clarification.
Rationale (04/99): The first sub-issue reflects wording that was changed to address the concern before the IS was issued. A close and careful reading of the Standard already leads to the conclusion that the example in the second sub-issue is ill-formed, so no change is needed.
The status of an example like the following is not clear:
template<class> struct x { template<class T> friend bool operator==(x<T>, x<T>) { return false; } }; int main() { x<int> x1; x<double> x2; x1 == x1; x2 == x2; }
Such a friend definition is permitted by 13.7.5 [temp.friend] paragraph 2:
A friend function template may be defined within a class or class template...
Paragraph 4 appears to be related, but deals only with friend functions, not friend function templates:
When a function is defined in a friend function declaration in a class template, the function is instantiated when the function is odr-used. The same restrictions on multiple declarations and definitions that apply to non-template function declarations and definitions also apply to these implicit definitions.
Rationale (February, 2021):
The resolution of issue 2174 deleted the paragraph in question and makes clear the treatment of friend function templates.
The current wording is not clear how to declare that a nested class template of a class template is a friend of its containing template. For example, is
template <class T> struct C { template <bool b> class Foo; template <bool b> friend class Foo; };
correct, or should it be
template <class T> struct C { template <bool b> class Foo; template <class X> template <bool b> friend class C<X>::Foo; };
Rationale (June, 2018)
The submitter asked that the issue be withdrawn.
Library issue 225 poses the following questions:
For example, a programmer might want to provide a version of std::swap that would be used for any specialization of a particular class template. It is possible to do that for specific types, but not for all specializations of a template.
The problem is due to the fact that programmers are forbidden to add overloads to namespace std, although specializations are permitted. One suggested solution would be to allow partial specialization of function templates, analogous to partial specialization of class templates.
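For illustration, the desired capability would look something like the following (a sketch using the hypothetical names do_swap and Cont rather than std::swap, to keep it self-contained):

template<typename T> struct Cont { };

// Possible today: a primary function template and an explicit specialization
// for one particular type.
template<typename T> void do_swap(T&, T&) { }
template<> void do_swap<Cont<int>>(Cont<int>&, Cont<int>&) { }

// The request: a partial specialization covering every Cont<T>, by analogy
// with class template partial specialization. Not part of the language:
// template<typename T> void do_swap<Cont<T>>(Cont<T>&, Cont<T>&) { }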
Library issue 225 contains a detailed proposal for adding partial specialization of function templates (not reproduced here in the interest of space and avoiding multiple-copy problems). This Core issue is being opened to provide for discussion of the proposal within the core language working group.
Notes from 10/00 meeting:
A major concern over the idea of partial specialization of function templates is that function templates can be overloaded, unlike class templates. Simply naming the function template in the specialization, as is done for class specialization, is not adequate to identify the template being specialized.
In view of this problem, the library working group is exploring the other alternative, permitting overloads to be added to functions in namespace std, as long as certain restrictions (to be determined) are satisfied.
(See also documents N1295 and N1296 and issue 285.)
Notes from 10/01 meeting:
The Core Working Group decided to ask the Library Working Group for guidance on whether this feature is still needed to resolve a library issue. The answer at present is "we don't know."
Rationale (October, 2004):
The Core Working Group decided that the Evolution Working Group is the appropriate forum in which to explore the desirability and form of this feature.
Note (March, 2008):
The Evolution Working Group recommended closing this issue with no further consideration. See paper J16/07-0033 = WG21 N2173.
There does not appear to be a way to declare (not define) a partial specialization of a static data member template outside its class. The rule for explicit specializations (13.9.4 [temp.expl.spec] paragraph 13) is that the presence or absence of an initializer determines whether the explicit specialization is a definition or not. Applying this rule to the partial specialization case, however, would conflict with being able to provide an initializer on the declaration within the class.
Do we need to support declaring partial specializations of static data member templates outside their class?
Rationale (February, 2014):
CWG felt that this issue is more appropriately considered by EWG.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 132.
One of the restrictions on partial specializations found in 13.7.6.1 [temp.spec.partial.general] paragraph 9 is:
The template parameter list of a specialization shall not contain default template argument values. [Footnote: There is no way in which they could be used. —end footnote]
The rationale for this restriction is incorrect, since default template argument values can be used to trigger SFINAE and thus control whether a particular partial specialization is used. An example of this use is:
template <typename T> struct a; template <typename T, typename = typename std::enable_if<some property>::type> struct a<std::vector<T>> { ... };
which is forbidden by this point. Note also that an example like
template <typename T> struct b; template <typename T, typename = typename std::enable_if<some property>::type> struct b<T> { ... };
is likely forbidden by the previous bullet:
The argument list of the specialization shall not be identical to the implicit argument list of the primary template.
This restriction may also need to be weakened.
Rationale (April, 2013)
CWG felt that consideration of these suggestions was more appropriately done by EWG.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 110.
Although 14.5 [except.spec] paragraph 3 says,
Two exception-specifications are compatible if:
...
both have the form noexcept(constant-expression) and the constant-expressions are equivalent, or
...
it is not clear whether “equivalent” in this context should be taken as a reference to the definition of equivalent given in 13.7.7.2 [temp.over.link] paragraph 5:
Two expressions involving template parameters are considered equivalent if two function definitions containing the expressions would satisfy the one definition rule (6.3 [basic.def.odr]), except that the tokens used to name the template parameters may differ as long as a token used to name a template parameter in one expression is replaced by another token that names the same template parameter in the other expression.
since the context there is expressions that appear in function template parameters and return types.
There is implementation variance on this question.
Rationale (February, 2021):
The text in question no longer appears in the Standard.
Issue 1:
13.7.7.3 [temp.func.order] paragraph 2 says:
Given two overloaded function templates, whether one is more specialized than another can be determined by transforming each template in turn and using argument deduction (13.10.3 [temp.deduct]) to compare it to the other.
13.10.3 [temp.deduct] now has 4 subsections describing argument deduction in different situations. I think this paragraph should point to a subsection of 13.10.3 [temp.deduct].
Rationale:
This is not a defect; it is not necessary to pinpoint cross-references to this level of detail.
Issue 2:
13.7.7.3 [temp.func.order] paragraph 4 says:
Using the transformed function parameter list, perform argument deduction against the other function template. The transformed template is at least as specialized as the other if, and only if, the deduction succeeds and the deduced parameter types are an exact match (so the deduction does not rely on implicit conversions).
In "the deduced parameter types are an exact match", the terms "exact match" do not make it clear what happens when a type T is compared to the reference type T&. Is that an exact match?
Issue 3:
13.7.7.3 [temp.func.order] paragraph 5 says:
A template is more specialized than another if, and only if, it is at least as specialized as the other template and that template is not at least as specialized as the first.
What happens in this case:
template<class T> void f(T,int);
template<class T> void f(T, T);
void g() { f(1, 1); }
For the first function template, there is no type deduction for the second parameter. So the rules in this clause seem to imply that the second function template will be chosen.
Rationale:
This is not a defect; the standard unambiguously makes the above example ill-formed due to ambiguity.
Additional note (April, 2011):
These points appear to have been addressed by previous resolutions, so presumably the issue is now NAD.
Rationale (August, 2011):
As given in the preceding note.
The relative order of template parameter pack expansion and alias template substitution is not clear in the current wording. For example, in
template<typename T> using Int = int; template<typename ...Ts> struct S { typedef S<Int<Ts>...> other; };
it is not clear whether int is substituted for Int<Ts> first, leaving the ellipsis with no parameter pack to expand, or whether the pack expansion is to be applied first, producing a list of specializations of Int<T>.
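For concreteness (an illustrative sketch, not normative text), under the latter reading the alias expands to a list of Int specializations:

#include <type_traits>

template<typename T> using Int = int;
template<typename ...Ts> struct S {
  typedef S<Int<Ts>...> other;
};

// Under the "expand the pack first" reading, S<char, double>::other denotes
// S<Int<char>, Int<double>>, i.e. S<int, int>.
static_assert(std::is_same<S<char, double>::other, S<int, int>>::value,
              "expansion-first reading");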
(See also issue 1558.)
Rationale (October, 2012):
The latter interpretation (a list of specializations) is the correct interpretation; a parameter pack can't be substituted into anything, including an alias template specialization. CWG felt that this is clear enough in the current wording.
Consider the following example:
template <class T> struct Outer { struct Inner { Inner* self(); }; }; template <class T> Outer<T>::Inner* Outer<T>::Inner::self() { return this; }
According to 13.8 [temp.res] paragraph 3 (before the salient wording was inadvertently removed, see issue 559),
A qualified-id that refers to a type and in which the nested-name-specifier depends on a template-parameter (13.8.3 [temp.dep]) but does not refer to a member of the current instantiation (13.8.3.2 [temp.dep.type]) shall be prefixed by the keyword typename to indicate that the qualified-id denotes a type, forming a typename-specifier.
Because Outer<T>::Inner is a member of the current instantiation, the Standard does not currently require that it be prefixed with typename when it is used in the return type of the definition of the self() member function. However, it is difficult to parse this definition correctly without knowing that the return type is, in fact, a type, which is what the typename keyword is for. Should the Standard be changed to require typename in such contexts?
Rationale (February, 2021):
The current wording of 13.8.1 [temp.res.general] bullet 5.2.1 makes clear that the typename keyword is not required for the given example.
The following appears to be well-formed, with templates foo() being distinct since any type T will produce an invalid type for the second parameter for at least one foo() when T is replaced within the non-deduced context:
template <typename T> bool *foo(T *, enum T::u_type * = 0) { return 0; } template <typename T> char *foo(T *, struct T::u_type * = 0) { return 0; } struct A { enum u_type { I }; }; int main(void) { foo((A*)0); }
In particular, while determining the signature for the function templates foo(), an elaborated-type-specifier qualifies as part of the decl-specifier-seq under 9.3.4.6 [dcl.fct] paragraph 5 in determining the type of a parameter in the parameter-type-list (absent additional wording). Also, the return type is included in the signature of a function template.
Implementations do not appear to support this case and the ability to do so brings little value since type traits such as is_enum and is_class cannot be defined using this and equivalent functionality can be achieved using the aforementioned type traits.
Rationale (August, 2010):
The specification is as intended; compilers should handle cases like these.
Recently a customer sent us code of the form,
template<typename T> void f(); template<> void f<int>() { } template<typename T> void f() { static_assert(false, "f() instantiated with non-int type."); }
The intent, obviously, was to do compile-time diagnosis of specializations of the template that were not supported, and code of this form is supported by at least some implementations. However, the current wording of 13.8 [temp.res] paragraph 8, appears to invalidate this approach:
If no valid specialization can be generated for a template, and that template is not instantiated, the template is ill-formed, no diagnostic required.
In this example, the static_assert will fail for every generated specialization of f(), so an implementation can issue the error, regardless of whether f() is ever instantiated with a non-int type or not.
A relatively straightforward but somewhat ugly workaround is to define a template like
template<typename> struct always_false { static const bool val = false; };
and replace the use of false in the static_assert with always_false<T>::val, making the static_assert dependent.
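Putting the two pieces together, the rewritten primary template would look something like this (a sketch of the workaround just described):

template<typename> struct always_false { static const bool val = false; };

template<typename T> void f();
template<> void f<int>() { }
template<typename T> void f() {
  // The condition is now dependent, so the template formally has valid
  // specializations and the blanket ill-formed-no-diagnostic rule no longer
  // applies; the assertion still fires for any non-int instantiation.
  static_assert(always_false<T>::val, "f() instantiated with non-int type.");
}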
Considering the fact that a non-dependent static_assert-declaration in a template is otherwise pretty useless, however, it might be worth considering whether to support this usage somehow, especially in light of the fact that it is supported by some implementations, perhaps by treating static_assert-declarations as always dependent, even if the condition is not otherwise dependent.
Rationale (October, 2012):
Although this usage is supported by some implementations and used in some libraries, CWG felt that =delete is the appropriate mechanism for making a function template or member function of a class template unavailable for specialization.
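A minimal sketch of the =delete mechanism mentioned in this rationale:

// Delete the primary template so that only the explicitly provided
// specializations are usable.
template<typename T> void f() = delete;
template<> void f<int>() { }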
Given that the type-id in an alias-declaration is unambiguously a type, is there a reason to require the typename keyword for dependent types appearing there? In other contexts where a dependent name can only be a type (e.g., in a base-specifier), the keyword can/must be omitted.
Rationale (October, 2012):
CWG felt that having a simple rule (advising use of typename with all dependent nested types wherever syntactically permitted) was more important than reducing the number of contexts in which the requirement applied.
According to 13.8 [temp.res] paragraph 8,
No diagnostic shall be issued for a template for which a valid specialization can be generated.
One sentence later, it says,
If every valid specialization of a variadic template requires an empty template parameter pack, the template is ill-formed, no diagnostic required.
This appears to be a contradiction: in the latter case, there is postulated to exist a “valid” specialization (with an empty pack expansion), for which a diagnostic might or might not be issued. The first quoted sentence, however, forbids issuing a diagnostic for a template that has at least one valid specialization.
Rationale (February, 2017):
The text in question was revised editorially and the issue is now moot.
Paragraphs 3-4 of 13.8 [temp.res] read, in part,
When a qualified-id is intended to refer to a type that is not a member of the current instantiation (13.8.3.2 [temp.dep.type]) and its nested-name-specifier refers to a dependent type, it shall be prefixed by the keyword typename, forming a typename-specifier. If the qualified-id in a typename-specifier does not denote a type, the program is ill-formed.
If a specialization of a template is instantiated for a set of template-arguments such that the qualified-id prefixed by typename does not denote a type, the specialization is ill-formed.
The former requirement is intended to apply to the definition and the latter to an instantiation of a template, but that intent is not completely clear, leading to the perception that they are redundant.
Rationale (February, 2021):
The specification, now found in 13.8.1 [temp.res.general], particularly in bullet 8.5, is clearer in this regard.
A gcc hacker recently sent in a patch to make the compiler give an error on code like this:
template <template <typename> class T> struct A { };
template <typename U> struct B {
  A<B> *p;
};
presumably because the DR from issue 176 says that we decide whether or not B is to be treated as a template depending on whether a template-argument-list is supplied. I think this is a drafting oversight, and that B should also be treated as a template when passed as a template template parameter. The discussion in the issue list only talks about making the name usable both as a class and as a template.
John Spicer: This case was explicitly discussed and it was agreed that to use the injected name as a template template parameter you need to use the non-injected name.
A (possibly unstated) rule that I've understood about template arguments is that the form of the argument (type/nontype/template) is based only on the argument and not on the kind of template parameter. An example is that "int()" is always "function taking no arguments returning int" and never a convoluted way of saying zero.
In a similar way, we now decide whether or not something is a template based only on the form of the argument.
I think this rule is important for two kinds of cases. The first case involves explicit arguments of function templates:
template <template <typename> class T> void f(){}   // #1
template <class T> void f(){}                       // #2
template <typename U> struct B {
  void g() { f<B>(); }
};
int main() {
  B<int> b;
  b.g();
}
With the current rules, this uses B as a type argument to template #2.
The second case involves the use of a class template for which the template parameter list is unknown at the point where the argument list is scanned:
template <class T> void f(){}
template <typename U> struct B {
  void g() {
    f< U::template X<B> >();   // what is B?
  }
};
struct Z1 {
  template <class T> struct X {};
};
struct Z2 {
  template <template <class> class T> struct X {};
};
int main() {
  B<Z1> b1;
  b1.g();
  B<Z2> b2;
  b2.g();
}
If B could be used as a template name we would be unable to decide how to treat B at the point that it was scanned in the template argument list.
So, I think it is not an oversight and that it should be left the way it is.
Notes from the 4/02 meeting:
It was agreed that this is Not a Defect.
There is some question as to whether 13.8.3 [temp.dep] paragraph 3 applies to the definition of an explicitly-specialized member of a class template:
In the definition of a class template or a member of a class template, if a base class of the class template depends on a template-parameter, the base class scope is not examined during unqualified name lookup either at the point of definition of the class template or member or during an instantiation of the class template or member.
Consider an example like the following:
template <class T> struct A { void foo() {} }; template <class T> struct B: A<T> { int bar(); }; int foo() { return 0; } template <> int B<int>::bar() { return foo(); } int main() { return B<int>().bar(); }
Does foo in the definition of B<int>::bar() refer to ::foo() or to A<int>::foo()?
Rationale (April, 2006):
An explicitly-specialized member of a class template is not, in fact, a member of a class template but a member of a particular specialization of that template. The special treatment of lookup vis-a-vis dependent base classes in 13.8.3 [temp.dep] thus does not apply, and base class members are found by unqualified name lookup.
Consider the following example:
struct A { virtual void f() { /* base */ } }; struct B : virtual A { virtual void f() { /* derived */ } }; template<typename T> struct C : virtual A, T { void g() { this->f(); } }; int main() { C<B> c; c.g(); }
This is reasonable C++03 code that is invalidated by the resolution of issue 1043. In the presence of virtual non-dependent base classes and other dependent base classes, one cannot rely on something being found for real when doing the lookup in the instantiation context (therefore, one cannot know whether a "typename" is actually valid or not, without knowing all dependent base classes).
Rationale (August, 2011):
This example is not sufficient motivation to revisit the outcome of issue 1043. ((T*)this)->f() can be used to allow lookup in a dependent base.
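A sketch of the workaround mentioned in this rationale, applied to the example above:

struct A { virtual void f() { /* base */ } };
struct B : virtual A { virtual void f() { /* derived */ } };
template<typename T> struct C : virtual A, T {
  void g() { ((T*)this)->f(); }   // cast to the dependent base, so the call
                                  // resolves to B::f for C<B>
};
int main() {
  C<B> c;
  c.g();
}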
It does not appear to be possible to use the name of an alias template (without a template argument list) to refer to the current instantiation.
Rationale (August, 2011):
The rules are as intended.
According to 13.8.2 [temp.local] paragraph 1, when the injected-class-name of a class template is not followed by a template-argument-list or otherwise used as a template-name,
it is equivalent to the template-name followed by the template-parameters of the class template enclosed in <>.
This use of the template-parameters of the class template should make the injected-class-name a dependent type; however, the definition of dependent types in 13.8.3.2 [temp.dep.type] paragraph 8 applies to the injected-class-name only when it appears in a simple-template-id. An additional case is needed for the bare injected-class-name.
Rationale (June, 2014):
The fact that the use of the bare injected-class-name is described as “equivalent” to the simple-template-id is sufficiently clear regarding its status that no additional entry is needed in the list of dependent types.
Is the comma expression in the following dependent?
template <class T> static void f(T) { } template <class T> void g(T) { f((T::x, 0)); } struct A { static int x; }; void h() { g(A()); }
According to the standard, it is, because 13.8.3.3 [temp.dep.expr] says that an expression is dependent if any of its sub-expressions is dependent, but there is a question about whether the language should say something different. The type and value of the expression are not really dependent, and similar cases (like casting T::x to int) are not dependent.
Mark Mitchell: If the first operand is dependent, how do we know it does not have an overloaded comma operator?
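An illustrative sketch of that point (C, A2, and h are hypothetical names; the pattern mirrors the f/g example above):

template <class T> static void f(T) { }
template <class T> void g(T) { f((T::x, 0)); }

// With an overloaded comma operator, the type of (T::x, 0) really does
// depend on T: here it is const char*, not int.
struct C { const char* operator,(int) const { return "comma"; } };
struct A2 { static C x; };
C A2::x;
void h() { g(A2()); }   // f is called with a const char* argument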
Rationale (October, 2004):
CWG agreed that such comma expressions are and ought to be dependent, for the reason expressed in Mark Mitchell's comment.
Does Koenig lookup create a point of instantiation for class types? I.e., if I say:
TT<int> *p;
f(p);
The namespaces and classes associated with p are those associated with the type pointed to, i.e., TT<int>. However, to determine those I need to know TT<int>'s bases and its friends, which requires instantiation.
Or should this be special cased for templates?
Rationale: The standard already specifies that this creates a point of instantiation.
A related question to that raised in issue 488 is whether member function templates must be instantiated if the compiler can determine that they will not be needed by the function selected by overload resolution. That is explicitly specified for class templates in 13.9.2 [temp.inst] paragraph 5:
If the overload resolution process can determine the correct function to call without instantiating a class template definition, it is unspecified whether that instantiation actually takes place.
Should the same be true for member function templates? In the example from issue 488,
struct S {
  template <typename T> S(const T&);
};
void f(const S&);
void f(int);
void g() {
  enum E { e };
  f(e);   // ill-formed?
}
a compiler could conceivably determine that f(int) would be selected by overload resolution (because it involves only an integral promotion, while the alternative requires a user-defined conversion) without instantiating the declaration of the S constructor. Should the compiler have that freedom?
Rationale (April, 2005):
In order for this question to come up, there would need to be a “gap” between the normal rules and the rules for template argument deduction failure. The resolution for issue 488 will close the only such gap of which the CWG is aware. The issue can be reopened if other such cases turn up.
Is the second explicit instantiation below well-formed?
template <class T> struct A {
  template <class T2> void f(T2){}
};
template void A<int>::f(char);             // okay
template template void A<int>::f(float);   // ?
Since multiple "template<>" clauses are permitted in an explicit specialization, it might follow that multiple "template" keywords should also be permitted in an explicit instantiation. Are multiple "template" keywords allowed in an explicit instantiation? The grammar permits it, but the grammar permits lots of stuff far weirder than that. My opinion is that, in the absence of explicit wording permitting that kind of usage (as is present for explicit specializations), such usage is not permitted for explicit instantiations.
Rationale (04/99): The Standard does not describe the meaning of multiple template keywords in this context, so the example should be considered as resulting in undefined behavior according to Clause 3 [intro.defs] “undefined behavior.”
The current language specification allows suppression of implicit instantiations of templates via an explicit instantiation declaration; if all uses of a particular specialization follow an explicit instantiation declaration for that specialization, and there is one explicit instantiation definition in the program, there will be only a single copy of that instance. However, the Standard does not require the presence of an explicit instantiation declaration prior to use, so implementations must still be prepared (using weak symbols, for example) to handle multiple copies of the instance at link time. This can be a significant overhead, particularly in shared libraries where weak symbols must be resolved at load time. Requiring the presence of an explicit instantiation declaration in every translation unit in which the specialization is used would allow the compiler to emit strong symbols for the explicit instantiation definition and reduce the overhead.
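For reference, a minimal sketch of the mechanism under discussion (file and function names are hypothetical):

// widget.h
template<typename T> T twice(T t) { return t + t; }
extern template int twice<int>(int);   // explicit instantiation declaration:
                                       // suppresses implicit instantiation here

// widget.cpp -- exactly one translation unit in the program
template int twice<int>(int);          // explicit instantiation definition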
On the other hand, the current definition allows use of multiple independent libraries with explicit instantiation directives of the same specializations (from a common third-party library, for instance), as well as incremental migration of libraries to use of explicit instantiation declarations rather than requiring all libraries to be updated at once.
Rationale (August, 2010):
CWG prefers the current specification.
According to 13.9.3 [temp.explicit] paragraph 9,
Except for inline functions and class template specializations, explicit instantiation declarations have the effect of suppressing the implicit instantiation of the entity to which they refer.
This means that an implementation cannot do inline expansion of an extern template function or member function, because that would require its instantiation. As a result, adding an explicit instantiation declaration can affect performance, even though the user only intended to suppress out-of-line copies of functions.
Rationale (August, 2010):
If implementations are allowed to do speculative instantiation for the purpose of inlining, there could be silent changes of meaning depending on whether the instantiation is done or not.
Consider the following example:
template <typename T> extern const decltype(sizeof 0) Sz = sizeof(T); extern template const decltype(sizeof 0) Sz<int>; constexpr decltype(sizeof 0) x = Sz<int>;
C++14 allows this, exempting “const variables of literal type" from the effects of an explicit instantiation declaration:
Except for inline functions, declarations with types deduced from their initializer or return value (9.2.9.6 [dcl.spec.auto]), const variables of literal types, variables of reference types, and class template specializations, explicit instantiation declarations have the effect of suppressing the implicit instantiation of the entity to which they refer. [Note: The intent is that an inline function that is the subject of an explicit instantiation declaration will still be implicitly instantiated when odr-used (6.3 [basic.def.odr]) so that the body can be considered for inlining, but that no out-of-line copy of the inline function would be generated in the translation unit. —end note]
Should there be a DR against C++11 for the similar case of a static data member of a class template?
Rationale (October, 2015):
CWG agreed that this was a defect in C++11, but it is addressed in C++14.
Consider:
template <class F> F foo() { return 1; } template <class F> struct S { F foo() { return 1; } }; extern template int foo<int>(); extern template struct S<int>; int bar() { return foo<int>() + S<int>().foo(); }
An implementation is permitted to instantiate (and thus locally inline) S<int>::foo, but not S<int>, because 13.9.2 [temp.inst] paragraph 10 states:
Except for inline functions, declarations with types deduced from their initializer or return value (9.2.9.6 [dcl.spec.auto]), const variables of literal types, variables of reference types, and class template specializations, explicit instantiation declarations have the effect of suppressing the implicit instantiation of the entity to which they refer.
Additional note (February, 2022):
The paragraph in question was removed by P1815R2 (Translation-unit-local entities) (adopted 2020-02).
EWG 2022-11-11
Any change would require a paper.
[N1065 issue 1.19] An explicit specialization declaration may not be visible during instantiation under the template compilation model rules, even though its existence must be known to perform the instantiation correctly. For example:
translation unit #1
template<class T> struct A { };
export template<class T> void f(T) { A<T> a; }
translation unit #2
template<class T> struct A { };
template<> struct A<int> { };   // not visible during instantiation
template<class T> void f(T);
void g() { f(1); }
Rationale: This issue was addressed in the C++11 FDIS and should have been closed.
Is this valid C++? The question is whether a member constant can be specialized. My inclination is to say no.
template <class T> struct A { static const T i = 0; };
template<> const int A<int>::i = 42;
int main () { return A<int>::i; }
John Spicer: This is ill-formed because 11.4.9.3 [class.static.data] paragraph 4 prohibits an initializer on a definition of a static data member for which an initializer was provided in the class.
The program would be valid if the initializer were removed from the specialization.
Daveed Vandevoorde: Or at least, the specialized member should not be allowed in constant-expressions.
Bill Gibbons: Alternatively, the use of a member constant within the definition could be treated the same as the use of "sizeof(member class)". For example:
template <class T> struct A {
  static const T i = 1;
  struct B { char b[100]; };
  char x[sizeof(B)];   // specialization can affect array size
  char y[i];           // specialization can affect array size
};
template<> const int A<int>::i = 42;
template<> struct A<int>::B { char z[200]; };
int main () {
  A<int> a;
  return sizeof(a.x)    // 200 (unspecialized value is 100)
       + sizeof(a.y);   // 42 (unspecialized value is 1)
}
For the member template case, the array size "sizeof(B)" cannot be evaluated until the template is instantiated because B might be specialized. Similarly, the array size "i" cannot be evaluated until the template is instantiated.
Rationale (10/99): The Standard is already sufficiently clear on this question.
John Spicer: Certain access checks are suppressed on explicit instantiations. 13.9.3 [temp.explicit] paragraph 8 says:
The usual access checking rules do not apply to names used to specify explicit instantiations. [Note: In particular, the template arguments and names used in the function declarator (including parameter types, return types and exception specifications) may be private types or objects which would normally not be accessible and the template may be a member template or member function which would not normally be accessible. ]
I was surprised that similar wording does not exist (that I could find) for explicit specializations. I believe that the two cases should be handled equivalently in the example below (i.e., that the specialization should be permitted).
template <class T> struct C {
  void f();
  void g();
};
template <class T> void C<T>::f(){}
template <class T> void C<T>::g(){}
class A {
  class B {};
  void f();
};
template void C<A::B>::f();      // okay
template <> void C<A::B>::g();   // error - A::B inaccessible
void A::f() {
  C<B> cb;
  cb.f();
}
Mike Miller: According to the note in 13.4 [temp.arg] paragraph 3,
if the name of a template-argument is accessible at the point where it is used as a template-argument, there is no further access restriction in the resulting instantiation where the corresponding template-parameter name is used.
(Is this specified anywhere in the normative text? Should it be?)
In the absence of text to the contrary, this blanket permission apparently applies to explicitly-specialized templates as well as to implicitly-generated ones (is that right?). If so, I don't see any reason that an explicit instantiation should be treated differently from an explicit specialization, even though the latter involves new program text and the former is just a placement instruction to the implementation.
Proposed Resolution (4/02):
In 13.9.3 [temp.explicit] delete paragraph 8:
The usual access checking rules do not apply to names used to specify explicit instantiations. [Note: In particular, the template arguments and names used in the function declarator (including parameter types, return types and exception specifications) may be private types or objects which would normally not be accessible and the template may be a member template or member function which would not normally be accessible. ]
In 13.9 [temp.spec] add the paragraph deleted above as paragraph 7, with the changes applied as shown below:
The usual access checking rules do not apply to names used to specify explicit instantiations or explicit specializations. [Note: The template arguments and names used in the function declarator (including parameter types, return types and exception specifications) may be private types or objects which would normally not be accessible and the template may be a member template or member function which would not normally be accessible. ]
Rationale (October 2002):
We reconsidered this and decided that the difference between the two cases (explicit specialization and explicit instantiation) is appropriate. The access rules are sometimes bent when necessary to allow naming something, as in an explicit instantiation, but explicit specialization requires not only naming the entity but also providing a definition somewhere.
The Standard does not describe how to handle an example like the following:
template <class T> int f(T, int); template <class T> int f(int, T); template<> int f<int>(int, int) { /*...*/ }
It is impossible to determine which of the function templates is being specialized. This problem is related to the discussion of issue 229, in which one of the objections raised against partial specialization of function templates is that it is not possible to determine which template is being specialized.
Notes from 10/01 meeting:
It was decided that while this is true, it's not a problem. You can't call such functions anyway; the call would be ambiguous.
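For illustration, any call that could make use of such a specialization is already ambiguous between the two primary templates:

int g() { return f(0, 0); }   // error: ambiguous; neither template is more
                              // specialized than the other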
It is not clear whether there is any necessary relationship between the type specified in a primary variable template declaration and the type in an explicit or partial specialization. For example:
template<typename T> T var = T();
template<> char var<char> = 'a';            // #1.
template<typename T> T* var<T> = new T();   // #2.
template<> float var<int> = 1.5;            // #3.
Rationale (September, 2013):
CWG affirmed that there is no required relationship between the type of the template and the type of a partial or explicit specialization of that template.
The example in 13.9.4 [temp.expl.spec] paragraph 6 reads, in part,
template<class T> struct A {
  enum E : T;
  enum class S : T;
};
template<> enum A<int>::E : int { eint };           // OK
template<> enum class A<int>::S : int { sint };     // OK
template<class T> enum A<T>::E : T { eT };
template<class T> enum class A<T>::S : T { sT };
template<> enum A<char>::E : int { echar };         // ill-formed, A<char>::E was instantiated
                                                    // when A<char> was instantiated
template<> enum class A<char>::S : int { schar };   // OK
The int enum-base in the last two lines appears to be incorrect; the reference to A<char> in the nested-name-specifier will have instantiated the declarations of E and S with an enum-base of char, and the explicit specializations must agree.
Rationale (February, 2014):
The problem was fixed editorially in N3797.
A desire has been expressed for a mechanism to prevent explicitly specializing a given class template, in particular std::initializer_list and perhaps some others in the standard library. It is not clear whether simply adding a prohibition to the description of the templates in the library clauses would be sufficient or whether a core language mechanism is required.
Rationale (June, 2014):
This request for a new language feature should be considered by EWG before any action is taken.
EWG 2022-11-11
The library clauses already prohibit user specializations of standard library templates. This is a request for new feature, which should be proposed in a paper to EWG.
It is not clear whether the following common practice is valid by the current rules:
// foo.h
template<typename T> struct X {
  int f();   // never defined
};

// foo.cc
#include "foo.h"
template<> int X<int>::f() { return 123; }

// main.cc
#include "foo.h"
int main() { return X<int>().f(); }
Relevant rules include Clause 13 [temp] paragraph 6,
A function template, member function of a class template, variable template, or static data member of a class template shall be defined in every translation unit in which it is implicitly instantiated (13.9.2 [temp.inst]) unless the corresponding specialization is explicitly instantiated (13.9.3 [temp.explicit]) in some translation unit; no diagnostic is required.
13.9.2 [temp.inst] paragraph 2,
Unless a member of a class template or a member template has been explicitly instantiated or explicitly specialized, the specialization of the member is implicitly instantiated when the specialization is referenced in a context that requires the member definition to exist...
and 13.9.4 [temp.expl.spec] paragraph 6:
If a template, a member template or a member of a class template is explicitly specialized then that specialization shall be declared before the first use of that specialization that would cause an implicit instantiation to take place, in every translation unit in which such a use occurs; no diagnostic is required. If the program does not provide a definition for an explicit specialization and either the specialization is used in a way that would cause an implicit instantiation to take place or the member is a virtual member function, the program is ill-formed, no diagnostic required. An implicit instantiation is never generated for an explicit specialization that is declared but not defined.
The intent appears to be that the reference in main.cc violates two rules: it implicitly instantiates something for which no definition is provided and that is not explicitly instantiated elsewhere, and it also causes an implicit instantiation of something explicitly specialized in another translation unit without a declaration of the explicit specialization.
Rationale (March, 2016):
As stated in the analysis, the intent is for the example to be ill-formed, no diagnostic required.
Given
template<class T, class U> struct A { };
template<class... T, class ... U> void f( A<T,U>...p);
void g() {
  f<int>( A<int,unsigned>(), A<short,unsigned short>() );
}
I would expect this to work, but all the recent compilers I tried reject it, indicating deduction failure.
Rationale (April, 2013):
This is well-formed.
According to 13.10.2 [temp.arg.explicit] paragraph 9,
Template argument deduction can extend the sequence of template arguments corresponding to a template parameter pack, even when the sequence contains explicitly specified template arguments.
However, it is not clear how to handle an example like:
template<class...> struct Z {
  Z (int);
};
template<class... Ts> void f (Z<Ts...>);
int main () {
  f<void, void> (0);
}
Rationale (November, 2014):
CWG was not convinced that such cases are sufficiently useful to warrant the additional complexity in the rules required to support them.
Consider:
struct S { operator int(); };
template<int N> void f(const int (&)[N]);
int main() {
  S s;
  f<2>({s, s});   // #1
  f({s, s});      // #2
}
Since the array element type is not deduced, implicit conversions ought to be permitted in #2, but implementations disagree. Is there a compelling reason to disallow them?
For comparison:
#include <initializer_list>
struct S { operator int(); };
template<typename T> void f(std::initializer_list<T>);
int main() {
  S s;
  f<int>({s, s});
}
Because T is not deduced, implicit conversions are allowed, and the number of elements in the underlying temporary array is determined from the number of elements in the initializer list. It seems that the intention of issue 1591 was to allow the underlying temporary array to be deduced directly, so the fact that the array bounds are deduced in the case above shouldn't inhibit implicit conversions.
Rationale (November, 2016):
Although there is an argument to be made for the suggested direction, the current rule is simple and easy to explain, so there was no consensus for a change.
Andrei Iltchenko points out that the standard has no wording that defines how to determine which template is specialized by an explicit specialization of a function template. He suggests "template argument deduction in such cases proceeds in the same way as when taking the address of a function template, which is described in 13.10.3.3 [temp.deduct.funcaddr]."
John Spicer points out that the same problem exists for all similar declarations, i.e., friend declarations and explicit instantiation directives. Finding a corresponding placement operator delete may have a similar problem.
John Spicer: There are two aspects of "determining which template" is referred to by a declaration: determining the function template associated with the named specialization, and determining the values of the template arguments of the specialization.
template <class T> void f(T);    // #1
template <class T> void f(T*);   // #2
template <> void f(int*);
In other words, which f is being specialized (#1 or #2)? And then, what are the deduced template arguments?
13.7.7.3 [temp.func.order] does say that partial ordering is done in contexts such as this. Is this sufficient, or do we need to say more about how the function template being specialized is selected?
13.10.3 [temp.deduct] probably needs a new section to cover argument deduction for cases like this.
Rationale (February, 2021):
The missing specification was added by C++11; see 13.10.3.7 [temp.deduct.decl].
The Standard currently specifies (9.2.4 [dcl.typedef] paragraph 9, 13.4.2 [temp.arg.type] paragraph 4) that an attempt to create a reference to a reference (via a typedef or template type parameter) is effectively ignored. The same is not true of an attempt to form a pointer to a reference; that is, assuming that T is specified to be a reference type,
template <typename T> void f(T t) {
  T& tr = t;    // OK
  T* tp = &t;   // error
}
It would be more consistent to allow pointers to references to collapse in the same way that references to references do.
Rationale (February, 2008):
In the absence of a compelling need, the CWG felt that it was better not to change the existing rules. Allowing this case could cause a quiet change to the meaning of a program, because attempting to create a pointer to a reference type is currently a deduction failure.
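As an illustration of the quiet-change concern, here is a minimal sketch (hypothetical overload set, not taken from the issue) of code that relies on pointer-to-reference formation being a deduction failure:

template <typename T> int h(...) { return 1; }   // fallback
template <typename T> int h(T*)  { return 2; }   // viable only if T* is a valid type
int main() {
  return h<int&>(nullptr);   // today substituting int& into T* fails silently, so the
                             // fallback is chosen; letting T* collapse to int* would
                             // quietly switch the call to the second overload
}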
Additional discussion (May, 2009):
Consider the following slightly extended version of the example above:
template <typename T> void f(T t) {
  T& tr = t;       // OK
  T* tp = &t;      // error
  auto * ap = &t;  // OK!
}
This means that a template that expects a reference type will need to use auto just to work around the failure to collapse pointer-to-reference types. The result might, in fact, be subtly different with auto, as well, in case there is an overloaded operator& function that doesn't return exactly T*. This contradicts one of the main goals of C++0x, to make it simpler, more consistent, and easier to teach.
Rationale (July, 2009):
The CWG reaffirmed its early decision. In general, templates will need to be written differently for reference and non-reference type parameters. Also, the Standard library provides a facility, std::remove_reference, that can be used easily for such cases.
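For reference, a minimal sketch of the std::remove_reference workaround the rationale points to (the names are illustrative only):

#include <type_traits>
template <typename T>
void f(T t) {
  T& tr = t;                                           // OK: reference collapsing applies
  typename std::remove_reference<T>::type* tp = &tr;   // OK even when T is a reference type
  (void)tp;
}
int main() {
  int i = 0;
  f<int&>(i);   // T = int&; tp has type int*
  f<int>(i);    // T = int;  tp has type int*
}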
With the resolution of issue 1170, which takes access into account in template argument deduction, it is now possible to have instantiation-dependent expressions (see issue 1172) that do not directly involve a template parameter. For example:
template <class T> struct C;
class A { int i; friend struct C<int>; } a;
class B { int i; friend struct C<float>; } b;
template <class T> struct C {
  template <class U> decltype (a.i) f() { }   // #1
  template <class U> decltype (b.i) f() { }   // #2
};
int main() {
  C<int>().f<int>();       // calls #1
  C<float>().f<float>();   // calls #2
}
Rationale (August, 2011):
The specification is as intended. To the extent that there is an issue here, it is covered by issue 1172.
Given an example like
template <class T> void f (T, int = T());
template <class T> auto g(T t) -> decltype (f(t));
void g(int);
struct A { A(int); operator int(); };
int main() {
  g(A(42));
}
it seems that since the default argument is treated as a separate template, its ill-formedness causes a hard error, rather than a substitution failure for g. Is this what we want?
Rationale (October, 2012):
CWG felt that this was acceptable; also, there is discussion in EWG regarding changes to the SFINAE rules that could affect this case.
Consider:
template <class T> void f(T&);
template <class T> void f(const T&);
void m() {
  const int p = 0;
  f(p);
}

Some compilers treat this as ambiguous; others prefer f(const T&). The question turns out to revolve around whether 13.10.3.2 [temp.deduct.call] paragraph 2 says what it ought to regarding the removal of cv-qualifiers and reference modifiers from template function parameters in doing type deduction.
John Spicer: The partial ordering rules as originally proposed specified that, for purposes of comparing parameter types, you remove a top level reference, and after having done that you remove top level qualifiers. This is not what is actually in the IS however. The IS says that you remove top level qualifiers and then top level references.
The original rules were intended to prefer f(A<T>) over f(const T&).
Rationale (10/99): The Standard correctly reflects the intent of the Committee.
(October 2002) This is resolved by issue 214.
In the following example,
template<typename T> void f(const T&);   // #1
template<typename T> void f(T&&);        // #2
void g() {
  const int x = 5;
  f(x);
}
the call f(x) is ambiguous by the current rules. For #1, T is deduced as int, giving
f<int>(const int&)
For #2, because of the special case for T&& in 13.10.3.2 [temp.deduct.call] paragraph 3, T is deduced as const int&; application of the reference-collapsing rules in 9.3.4.3 [dcl.ref] paragraph 6 to the substituted parameter type yields
f<const int&>(const int&)
These are indistinguishable in overload resolution, resulting in an ambiguity. It's not clear how this might be addressed.
Rationale (August, 2010):
The two functions are distinguished by partial ordering, so the call is not actually ambiguous.
Simple pointer:

Template argument: const T*
Actual argument: char*

We deduce T to be char, and by compatibility of cv-qualifiers, this deduction works.

Now suppose that (somehow) we deduced T to be const char. We fold the resulting const const char* into const char*, and again the deduction works. Does the standard disallow this deduction? The reason for this question is in the next example.

Pointer to member:

Template argument: const T A<T>::*, where A is a class template
Actual argument: char A<const char>::*

Compilers reject this case, because if we deduce the first T to be char, we cannot match A<T>::*. But suppose we look first at the second T, deducing it to be const char. Then we get const char A<const char>::*, which is OK. Alternatively, as in the hypothetical case in example 1, suppose we deduce the first T to be const char; again we get a match.
Arbitrarily adding extra cv-qualifiers in looking for a match, or trying different matching orders to find one that works, seems wrong. But are these approaches disallowed?
For completeness, here is a code example:
template <typename Q>
struct A {
int i;
};
template <typename T>
void foo(const T A<T>::*) { }
int main() {
A<const int> a;
int A<const int>::*p = &A<const int>::i;
foo(p); // rejected by all compilers, but why?
}
Rationale (September, 2013):
There are no rules that would result in deducing const char in the specified example.
An rvalue reference type involving a template parameter receives special treatment in template argument deduction in 13.10.3.2 [temp.deduct.call] paragraph 3:
If P is an rvalue reference to a cv-unqualified template parameter and the argument is an lvalue, the type “lvalue reference to A” is used in place of A for type deduction.
Does this rule apply when the parameter type involves an alias template instead of using a template parameter directly? For example:
template<class T> using RR = T&&;
template<class T> struct X {};
template<class T> struct X<T&&>; // Leave incomplete to possibly trigger an error
template<class T> void g(RR<T> p) {
X<decltype(p)> x;
}
int main() {
int x = 2;
g(x);
}
There is implementation variance on the treatment of this example.
Additional note (October, 2013):
It was observed that the type of the parameter would be the same whether written as T&& or as RR<T>, which would require that deduction be performed the same, regardless of how the type was written.
Rationale (September, 2013):
Because the types of the function parameters are the same, regardless of whether written directly or via an alias template, deduction must be handled the same way in both cases.
Consider the following example:
template <typename T> void f(T** p, void (*)());   // #1
template <typename T> void f(T* p, void (&)());    // #2
void x();
void g(int** p) {
  f(p, x);   // #3
}
The question is whether the call at #3 is ambiguous or not.
The synthesized declarations for overload resolution are:
void f<int>(int**, void(*)());    // From #1
void f<int*>(int**, void(&)());   // From #2
Neither of these is a better match on the basis of conversion sequences (the function-to-pointer conversion and the reference binding have “exact match” rank), and both are function template specializations, so the tiebreaker in 12.2.4 [over.match.best] paragraph 1 comes down to whether #1 is more specialized than #2 or vice versa.
The determination of whether one of these templates is more specialized than the other is done (as described in 13.7.7.3 [temp.func.order]) by synthesizing a type for the template parameter of each function template (call them @1 and @2, respectively), substituting that synthesized type for each occurrence of the template parameter in the function type of the template, and then performing deduction on each pair of corresponding function parameters as described in 13.10.3.5 [temp.deduct.partial].
For the first function parameter, #1 is more specialized: deduction succeeds with P=T* and A=@1**, giving T=@1*, but it fails with P=T** and A=@2*. For the second parameter, deduction fails in both directions, with P=void(*)() and A=void() as well as with P=void() and A=void(*)() (the reference is dropped from both the parameter and argument types, as described in 13.10.3.5 [temp.deduct.partial] paragraph 5). This means that neither parameter type is at least as specialized as the other (paragraph 8).
According to 13.10.3.5 [temp.deduct.partial] paragraph 10,
If for each type being considered a given template is at least as specialized for all types and more specialized for some set of types and the other template is not more specialized for any types or is not at least as specialized for any types, then the given template is more specialized than the other template. Otherwise, neither template is more specialized than the other.
According to this rule, #1 is not more specialized than #2 because it is not “at least as specialized” for the second parameter type, so the call at #3 is ambiguous.
Results vary among implementations, with some rejecting the call as ambiguous and others resolving it to #1.
Would it be better to say that a function template F1 is more specialized than F2 if at least one of F1's types is more specialized than the corresponding F2 type and none of F2's types is more specialized than the corresponding F1 type? That would be simpler and, for examples like this, arguably more intuitive. The rationale for this change would be that if, for a given parameter pair, neither is more specialized than the other, then that parameter pair simply says nothing about whether one of the templates is more specialized than the other, rather than indicating that the templates cannot be ordered.
(See also issue 455.)
Rationale (October, 2009):
The consensus of the CWG is that this issue, in which corresponding parameters cannot be compared but the functions are equivalent in terms of overload resolution, arises so infrequently in practice that no change is warranted at this time.
Consider an example like:
template<typename T> struct A {
  A(const T&);   // #1
  A(T&&);        // #2
};
template<typename U> A(U&&) -> A<double>;   // #3
int main() {
  int i = 0;
  const int ci = 0;
  A a1(0);
  A a2(i);
  A a3(ci);
}
This example is covered by 13.10.3.5 [temp.deduct.partial] paragraph 9:
If, for a given type, deduction succeeds in both directions (i.e., the types are identical after the transformations above) and both P and A were reference types (before being replaced with the type referred to above):
if the type from the argument template was an lvalue reference and the type from the parameter template was not, the parameter type is not considered to be at least as specialized as the argument type; otherwise,
if the type from the argument template is more cv-qualified than the type from the parameter template (as described above), the parameter type is not considered to be at least as specialized as the argument type.
For a2(i), the deduction guide is the best match, so this is an A<double>.
For a3(ci), the first bullet applies, which prefers #1 to #3 since #1 comes from an lvalue reference and #3 does not, resulting in an A<int>.
For a1(0), the case is not covered by partial ordering, so 12.2.4 [over.match.best] bullet 1.10 applies and prefers #3 to #2, which is again an A<double>.
It seems inconsistent to prefer #1 to #3 (T const & to U&&), but to prefer #3 to #2 (U&& to T&&). Should the rules be expanded to basically prefer any non-forwarding-reference to a forwarding reference?
Rationale (June, 2018):
There was no consensus to make a change at this point; the behavior is as intended.
Consider the following:
template <typename T> struct X {};         // #1
template <typename T> struct X<const T>;   // #2
template struct X<int&>;                   // #3
Which specialization are we instantiating in #3? The "obvious" answer is #1, because "int&" doesn't have a top level cv-qualification. However, there's also an argument saying that we should actually be instantiating #2. The argument is: int& can be taken as a match for either one (top-level cv-qualifiers are ignored on references, so they're equally good), and given two equally good matches we must choose the more specialized one.
Is this a valid argument? If so, is this behavior intentional?
John Spicer: I don't see the rationale for any choice other than #1. While it is true that if you attempt to apply const to a reference type it just gets dropped, that is very different from saying that a reference type is acceptable where a const-qualified type is required.
If the type matched both templates, the const one would be more specialized, but "int&" does not match "const T".
Nathan Sidwell: thanks for bringing this one to the committee. However this is resolved, I'd like clarification on the followup questions in the gcc bug report regarding deduced and non-deduced contexts and function templates. Here're those questions for y'all,
template <typename T> void Foo (T *);         // #1
template <typename T> void Foo (T const *);   // #2
void Baz ();
Foo (Baz);   // which?

template <typename T> T const *Foo (T *);   // #1
void Baz ();
Foo (Baz);   // well formed?

template <typename T> void Foo (T *, T const * = 0);
void Baz ();
Foo (Baz);   // well formed?
BTW, I didn't go trying to break things, I implemented the cv-qualifier ignoring requirements and fell over this. I could find nothing in the standard saying 'don't do this ignoring during deduction'.
Issue 226 removed the original prohibition on default template-arguments for function templates. However, the note in 13.10.3.6 [temp.deduct.type] paragraph 19 still reflects that prohibition. It should be revised or removed.
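For context, a minimal sketch (hypothetical function name) of the kind of declaration the removed prohibition used to forbid and the obsolete note no longer needs to describe:

template <typename T = int>
T zero() { return T(); }   // default template-argument on a function template
int main() {
  return zero();           // T defaults to int
}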
Rationale (February, 2010):
The problematic note was removed as a by-product of the changes for variadic templates, document N2284.
The special rule in 13.10.3.6 [temp.deduct.type] paragraph 10 for handling T&& in template argument deduction applies only to function parameters. It also needs to apply to function return types (including for conversion function templates, 13.10.3.4 [temp.deduct.conv]).
Rationale (February, 2012):
The specification is as intended: the special treatment of lvalue arguments in deduction is to make “perfect forwarding” work and should not be applied in other contexts.
It would be useful to be able to deduce the type of a function template argument from a corresponding default function argument expression, for example:
template <class T> int f(T = 0); int x = f();
A more realistic use case would be
template <class T, class U> int f(T x, U y = pair<T, T>());
Ideally one would also like
template <class T, class U> int f(T x, U y = g(x));
These capabilities are part of the Boost parameter library, so there should not be issues of implementability.
Rationale (February, 2014):
EWG determined that no action should be taken on this issue.
The grammar should be changed so that constructor function-try-blocks must end with a throw-expression.
Rationale (04/00):
No change is needed. It is already the case that flowing off the end of a handler of a constructor's function-try-block rethrows the exception, and return statements are prohibited in a constructor's function-try-block (14.4 [except.handle] paragraphs 15-16).
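A minimal sketch (hypothetical classes, not from the issue) of the behavior the rationale relies on; the handler's implicit rethrow is what makes an explicit trailing throw-expression unnecessary:

#include <cstdio>
#include <stdexcept>
struct Member {
  Member() { throw std::runtime_error("member initialization failed"); }
};
struct Widget {
  Member m;
  Widget() try : m() {
  } catch (const std::exception& e) {
    std::printf("logged: %s\n", e.what());
    // control reaching the end of this handler rethrows the exception;
    // a return statement here would be ill-formed
  }
};
int main() {
  try { Widget w; }
  catch (const std::exception&) { std::puts("rethrown exception caught in main"); }
}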
Questions regarding when a throw-expression temporary object is destroyed.
Section 14.2 [except.throw] paragraph 4 describes when the temporary is destroyed when a handler is found. But what if no handler is found:
struct A {
  A()         { printf ("A() \n"); }
  A(const A&) { printf ("A(const A&)\n"); }
  ~A()        { printf ("~A() \n"); }
};
void t() { exit(0); }
int main() {
  std::set_terminate(t);
  throw A();
}

Does A::~A() ever execute here? (Or, in case two constructions are done, are there two destructions done?) Is it implementation-defined, analogously to whether the stack is unwound before terminate() is called (14.4 [except.handle] paragraph 9)?
Or what if an exception specification is violated? There are several different scenarios here:
int glob = 0;   // or 1 or 2 or 3
struct A {
  A()         { printf ("A() \n"); }
  A(const A&) { printf ("A(const A&)\n"); }
  ~A()        { printf ("~A() \n"); }
};
void u() {
  switch (glob) {
    case 0: exit(0);
    case 1: throw "ok";
    case 2: throw 17;
    default: throw;
  }
}
void foo() throw(const char*, std::bad_exception) { throw A(); }
int main() {
  std::set_unexpected(u);
  try {
    foo();
  } catch (const char*) {
    printf("in handler 1\n");
  } catch (std::bad_exception) {
    printf("in handler 2\n");
  }
}

The case where u() exits is presumably similar to the terminate() case. But in the cases where the program goes on, A::~A() should be called for the thrown object at some point. But where does this happen? The standard doesn't really say. Since an exception is defined to be "finished" when the unexpected() function exits, it seems to me that is where A::~A() should be called — in this case, as the throws of new (or what will become new) exceptions are made out of u(). Does this make sense?
Rationale (10/99): Neither std::exit(int) nor std::abort() destroys temporary objects, so the exception temporary is not destroyed when no handler is found. The original exception object is destroyed when it is replaced by an unexpected() handler. The Standard is sufficiently clear on these points.
According to 14.2 [except.throw] paragraph 7,
If the exception handling mechanism, after completing the initialization of the exception object but before the activation of a handler for the exception, calls a function that exits via an exception, std::terminate is called (14.6.2 [except.terminate]).
This could be read as indicating that the following example results in calling std::terminate:
// function that exits via an exception
void f() {
  // std::uncaught_exception() returns true here
  throw 0;
}
struct X {
  ~X() {
    // calls a function that exits via an exception
    try { f(); } catch( ... ) { }
  }
};
int main() {
  try {
    X x;
    throw 0;   // calls X's destructor while exception is still uncaught.
  } catch( ... ) { }
}
This seems undesirable, and current implementations do not call std::terminate. Presumably the intention is that the cited text only applies to functions that are called directly by the exception handling mechanism, which is not true of f() in this example, but this could be stated more clearly.
Rationale (September, 2013):
The intent of the wording in 14.2 [except.throw] paragraph 7 is to call std::terminate if an exception is propagated into the exception-handling mechanism; “If the exception handling mechanism... calls a function that exits via an exception” is thus intended to refer to functions that are directly called by the exception handling mechanism. In the given example, f() is not called by the exception handling mechanism, it is called by X::~X(). The exception handling mechanism calls X::~X(), but it does not exit via an exception, so std::terminate should not be called.
According to 14.3 [except.ctor] paragraph 2,
An object of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed for all of its fully constructed subobjects (excluding the variant members of a union-like class)...
The restriction for variant members does not appear to be necessary when there is a mem-initializer or non-static data member initializer for a union member, as that determines which union member is active during the execution of the constructor.
Rationale (April, 2013):
Although the active member of a union is determined by an initializer, it could change during the execution of the constructor's compound-statement.
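A minimal sketch (hypothetical types, not from the issue) of the situation the rationale describes; the default member initializer makes s the active member on entry to the constructor body, but the body then switches the active member:

struct S {
  S() {}
  ~S() {}
};
union U {
  S s{};      // default member initializer: s is the active member initially
  int n;
  U() {
    s.~S();   // end the lifetime of s ...
    n = 42;   // ... and make n the active member instead
  }
  ~U() {}     // user-provided because a variant member has a non-trivial destructor
};
int main() { U u; }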
[Detailed description pending.]
Rationale (November, 2016):
The reported issue is no longer relevant after the adoption of paper P0490R0 at the November, 2016 meeting.
[Detailed description pending.]
Rationale (November, 2016):
The reported issue is no longer relevant after the adoption of paper P0490R0 at the November, 2016 meeting.
14.4 [except.handle] paragraph 3 contains the following text:
A handler is a match for a throw-expression with an object of type E if
- The handler is of type cv T or cv T& and E and T are the same type (ignoring the top-level cv-qualifiers), or
- the handler is of type cv T or cv T& and T is an unambiguous public base class of E, or
- the handler is of type cv1 T* cv2 and E is a pointer type that can be converted to the type of the handler by either or both of
- a standard pointer conversion (7.3.12 [conv.ptr]) not involving conversions to pointers to private or protected or ambiguous classes
- a qualification conversion
I propose to alter this text to allow to catch exceptions with ambiguous public base classes by some of the public subobjects. I'm really sure that if someone writes:
try {
  // ...
} catch (Matherr& m) {
  // ...
}

he really wants to catch all Matherrs rather than to allow some of the Matherrs to escape:
class SomeMatherr : public Matherr { /* */ }; struct TrickyBase1 : public SomeMatherr { /* */ }; struct TrickyBase2 : public SomeMatherr { /* */ }; struct TrickyMatherr : public TrickyBase1, TrickyBase2 { /* */ };
According to the standard TrickyMatherr will leak through the catch (Matherr& m) clause. For example:
#include <stdio.h>
struct B {};
struct B1 : B {};
struct B2 : B {};
struct D : B1, B2 {};   // D() has two B() subobjects
void f() { throw D(); }
int main() {
  try { f(); }
  catch (B& b) { puts("B&"); }   // passed
  catch (D& d) { puts("D&"); }   // really works _after_ B&!!!
}
Also I see one more possible solution: to forbid objects with ambiguous base classes to be "exceptional objects" (for example Borland C++ goes this way) but it seems to be unnecessary restrictive.
Notes from the 10/01 meeting:
The Core Working Group did not feel this was a significant problem. Catching either of the ambiguous base classes would be surprising, and giving an error on throwing an object that has an ambiguous base class would break existing code.
If control reaches the end of a handler in a destructor's function-try-block, the exception is rethrown (14.4 [except.handle] paragraph 15). Because of the danger of destructors that throw exceptions, would it be better to treat this case as an implicit return; statement, as in a function body? There could be a transitional period, perhaps using conditionally-supported behavior or the like, before mandating the change.
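A minimal sketch (hypothetical classes; noexcept(false) is needed under the later default so the rethrow can actually propagate) of the current behavior the issue questions:

#include <cstdio>
#include <stdexcept>
struct Member {
  ~Member() noexcept(false) { throw std::runtime_error("member cleanup failed"); }
};
struct Holder {
  Member m;
  ~Holder() noexcept(false) try {
  } catch (const std::exception& e) {
    std::printf("logged in ~Holder: %s\n", e.what());
    // reaching the end of this handler rethrows rather than returning
  }
};
int main() {
  try { Holder h; }
  catch (const std::exception&) { std::puts("rethrown exception reaches main"); }
}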
Rationale (October, 2006):
The CWG felt that the current behavior is clearly specified and reflects the intention of the Committee at the time the rules were adopted. Possible changes to these rules should be pursued through the Evolution Working Group.
14.5 [except.spec] paragraph 13 contains the following text. I believe 'implicitLY' marked below should be replaced with 'implicit.'
An implicitly declared special member function (11.4.4 [special]) shall have an exception-specification. If f is an implicitly declared default constructor, copy constructor, destructor, or copy assignment operator, its implicit exception-specification specifies the type-id T if and only if T is allowed by the exception-specification of a function directly invoked by f's implicitly definition; f shall allow all exceptions if any function it directly invokes allows all exceptions, and f shall allow no exceptions if every function it directly invokes allows no exceptions. [Example:
struct A {
  A();
  A(const A&) throw();
  ~A() throw(X);
};
struct B {
  B() throw();
  B(const B&) throw();
  ~B() throw(Y);
};
struct D : public A, public B {
  // Implicit declaration of D::D();
  // Implicit declaration of D::D(const D&) throw();
  // Implicit declaration of D::~D() throw (X,Y);
};

Furthermore, if A::~A() or B::~B() were virtual, D::~D() would not be as restrictive as that of A::~A, and the program would be ill-formed since a function that overrides a virtual function from a base class shall have an exception-specification at least as restrictive as that in the base class. ]
The example code shows structs whose destructors have exception specifications which throw certain types. There is no defect here, but it doesn't sit well with our general advice elsewhere that destructors should not throw. I wish I could think of some other way to illustrate this section.
Notes from October 2002 meeting:
This was previously resolved by an editorial change.
It is unclear whether std::unexpected is called before or after the destruction of function arguments, partially-constructed bases and members (when called from a constructor or destructor), etc.
Rationale (October, 2009):
The point at which std::unexpected is called is specified in _N4606_.15.5.2 [except.unexpected] paragraph 1:
If a function with an exception-specification throws an exception that is not listed in the exception-specification, the function std::unexpected() is called (_N4606_.D.6 [exception.unexpected]) immediately after completing the stack unwinding for the former function.
That is, it will be called after any local automatic objects and temporaries are destroyed and before any other objects, such as function arguments, are destroyed. (See 7.6.1.3 [expr.call] paragraph 4: “The initialization and destruction of each parameter occurs within the context of the calling function.”)
The consensus at the Pittsburgh (March, 2010) meeting, as reflected in the adoption of paper N3050, was that it was preferable for violation of a noexcept guarantee to call std::terminate; previous versions of the paper had called for undefined behavior in this case. Not everyone was convinced that this was a good decision, however; this issue is intended to facilitate further investigation and discussion of the question with the benefit of more time and resources than were available during the deliberations at the meeting.
Rationale (August, 2010):
CWG reaffirmed the explicit decision of the Committee.
Although 14.5 [except.spec] paragraphs 5-6 require that overriding a virtual function and initializing or assigning to a function pointer not weaken exception-specifications, the same is not true of providing a template argument for a template parameter. For example,
template<void (*FP)() noexcept> void x() { } void f() noexcept(false); template void x<f>();
is currently well-formed, which seems inconsistent. (Note that if exception-specifications become part of the type system, as proposed in issue 92, this issue will become moot.)
See also issues 2010, 1995, 1975, and 1946.
Rationale (February, 2014):
Are template declarations that differ only in the exception-specification of a parameter redeclarations, or are they separate templates, distinguished (presumably) by deduction failure? This seems like a question more appropriate for consideration by EWG.
Additional note, April, 2015:
EWG has decided not to make a change in this area. See EWG issue 133.
According to 14.5 [except.spec] paragraph 4,
If any declaration of a function has an exception-specification that is not a noexcept-specification allowing all exceptions, all declarations, including the definition and any explicit specialization, of that function shall have a compatible exception-specification.
This seems excessive for explicit specializations, considering that paragraph 6 applies a looser requirement for virtual functions:
If a virtual function has an exception-specification, all declarations, including the definition, of any function that overrides that virtual function in any derived class shall only allow exceptions that are allowed by the exception-specification of the base class virtual function.
The rule in paragraph 4 is also problematic in regard to explicit specializations of destructors and defaulted special member functions, as the implicit exception-specification of the template member function cannot be determined.
There is also a related problem with defaulted special member functions and exception-specifications. According to 9.5.2 [dcl.fct.def.default] paragraph 3,
If a function that is explicitly defaulted has an explicit exception-specification that is not compatible (14.5 [except.spec]) with the exception-specification on the implicit declaration, then
if the function is explicitly defaulted on its first declaration, it is defined as deleted;
otherwise, the program is ill-formed.
This rule precludes defaulting a virtual base class destructor or copy/move functions if the derived class function will throw an exception not allowed by the implicit base class member function.
This request for a language extension should be evaluated by EWG before any action is taken.
Additional note, November, 2020:
This request applied to full exception specifications and is no longer relevant in the current language, where only noexcept-specifiers are permitted.
EWG 2022-11-11
Close as NAD.
The description of the “set of potential exceptions of an expression” in 14.5 [except.spec] paragraph 15 does not appear to be fully recursive, so it can miss the effect of, e.g., a throw-expression as a subexpression. In addition, bullet 15.1.1, which reads,
If its postfix-expression is a (possibly parenthesized) id-expression (_N4567_.5.1.1 [expr.prim.general]), class member access (7.6.1.5 [expr.ref]), or pointer-to-member operation (7.6.4 [expr.mptr.oper]) whose cast-expression is an id-expression, S is the set of potential exceptions of the entity selected by the contained id-expression (after overload resolution, if applicable).
omits the case where the postfix-expression is a function call whose return type is a function pointer with an exception specification.
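A minimal sketch (hypothetical names, and written with the current noexcept machinery rather than a dynamic exception specification) of the omitted shape: the postfix-expression of the outer call is itself a function call, and the governing exception specification is the one on the returned pointer-to-function type:

using NoThrowFn = void (*)() noexcept;
void no_throw_target() noexcept {}
NoThrowFn get() noexcept { return no_throw_target; }
// The outer call in get()() goes through the returned pointer, so its potential
// exceptions come from the pointer's function type, not from an id-expression.
static_assert(noexcept(get()()), "both the inner and the outer call are non-throwing");
int main() { get()(); }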
Notes from the June, 2016 meeting:
This text will be replaced by the removal of dynamic exception specifications (P0003) and thus does not need to be changed at this time. The issue is placed in "review" status until document P0003 is adopted.
Rationale (February, 2017):
The issue is moot after the adoption of paper P0003.
Consider:
#include <type_traits>
template<class T>
T foo(T) noexcept(std::is_nothrow_move_constructible<T>::value);
int main() {
  sizeof(foo(0));
}

According to 14.5 [except.spec] paragraph 13:
An exception specification is considered to be needed when:
- in an expression, the function is selected by overload resolution (12.2 [over.match], 12.3 [over.over]);
- ...
Is it intended that the exception specification is needed for the example? The function call is never evaluated and the exception specification is not queried.
Rationale (November, 2016):
The type of the function is needed to know how to call it, and the exception specification is part of the function type.
Destructors that throw can easily cause programs to terminate, with no possible defense. Example: Given
struct XY { X x; Y y; };
Assume that X::~X() is the only destructor in the entire program that can throw. Assume further that Y construction is the only other operation in the whole program that can throw. Then XY cannot be used safely, in any context whatsoever, period — even simply declaring an XY object can crash the program:
XY xy; // construction attempt might terminate program:
       // 1. construct x -- succeeds
       // 2. construct y -- fails, throws exception
       // 3. clean up by destroying x -- fails, throws exception,
       //    but an exception is already active, so call
       //    std::terminate() (oops)
       // there is no defense

So it is highly dangerous to have even one destructor that could throw.
Suggested Resolution:
Fix the above problem in one of the following two ways. I prefer the first.
Fergus Henderson: I disagree. Code using XY may well be safe, if X::~X() only throws if std::uncaught_exception() is false.
I think the current exception handling scheme in C++ is certainly flawed, but the flaws are IMHO design flaws, not minor technical defects, and I don't think they can be solved by minor tweaks to the existing design. I think that at this point it is probably better to keep the standard stable, and learn to live with the existing flaws, rather than trying to solve them via TC.
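A minimal sketch (hypothetical class, not from the correspondence) of the defensive destructor Fergus Henderson describes above, written with std::uncaught_exceptions() as the modern spelling of the std::uncaught_exception() test he mentions:

#include <exception>
#include <stdexcept>
struct X {
  ~X() noexcept(false) {
    bool cleanup_failed = true;   // stands in for a real failure check
    if (cleanup_failed && std::uncaught_exceptions() == 0)
      throw std::runtime_error("cleanup failed");
    // during stack unwinding the error is swallowed instead of reaching std::terminate()
  }
};
int main() {
  try { X x; }                              // destructor throws; no unwinding in progress
  catch (const std::runtime_error&) { }
}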
Bjarne Stroustrup: I strongly prefer to have the call to std::terminate() be conforming. I see std::terminate() as a proper way to blow away "the current mess" and get to the next level of error handling. I do not want that escape to be non-conforming — that would imply that programs relying on error handling based on serious errors being handled by terminating a process (which happens to be a C++ program) in std::terminate() become non-conforming. In many systems, there are — and/or should be — error-handling and recovery mechanisms beyond what is offered by a single C++ program.
Andy Koenig: If we were to prohibit writing a destructor that can throw, how would I solve the following problem?
I want to write a class that does buffered output. Among the other properties of that class is that destroying an object of that class writes the last buffer on the output device before freeing memory.
What should my class do if writing that last buffer indicates a hardware output error? My user had the option to flush the last buffer explicitly before destroying the object, but didn't do so, and therefore did not anticipate such a problem. Unfortunately, the problem happened anyway. Should I be required to suppress this error indication anyway? In all cases?
Herb Sutter (June, 2007): IMO, it's fine to suppress it. The user had the option of flushing the buffer and thus being notified of the problem and chose not to use it. If the caller didn't flush, then likely the caller isn't ready for an exception from the destructor, either. You could also put an assert into the destructor that would trigger if flush() had not been called, to force callers to use the interface that would report the error.
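A minimal sketch (hypothetical class, not from the correspondence) of the interface Herb Sutter suggests: the caller flushes explicitly and handles any error there, and the destructor merely asserts that this happened rather than throwing:

#include <cassert>
#include <cstdio>
class BufferedWriter {
public:
  bool flush() {                 // returns false on a (simulated) hardware error
    flushed_ = true;
    return std::fputs("buffer contents\n", stdout) >= 0;
  }
  ~BufferedWriter() {
    assert(flushed_ && "flush() must be called before destruction");
    if (!flushed_)
      flush();                   // last-chance flush; any error is necessarily swallowed
  }
private:
  bool flushed_ = false;
};
int main() {
  BufferedWriter w;
  if (!w.flush())
    std::fputs("write error reported to the caller\n", stderr);
}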
In practice, I would rather throw an exception, even at the risk of crashing the program if we happen to be in the middle of stack unwinding. The reason is that the program would crash only if a hardware error occurred in the middle of cleaning up from some other error that was in the process of being handled. I would rather have such a bizarre coincidence cause a crash, which stands a chance of being diagnosed later, than have it be ignored entirely and leave the system in a state where the ignored error could cause other trouble later that is even harder to diagnose.
If I'm not allowed to throw an exception when I detect this problem, what are my options?
Herb Sutter: I understand that some people might feel that "a failed dtor during stack unwinding is preferable in certain cases" (e.g., when recovery can be done beyond the scope of the program), but the problem is "says who?" It is the application program that should be able to decide whether or not such semantics are correct for it, and the problem here is that with the status quo a program cannot defend itself against a std::terminate() — period. The lower-level code makes the decision for everyone. In the original example, the mere existence of an XY object puts at risk every program that uses it, whether std::terminate() makes sense for that program or not, and there is no way for a program to protect itself.
That the "it's okay if the process goes south should a rare combination of things happen" decision should be made by lower-level code (e.g., X dtor) for all apps that use it, and which doesn't even understand the context of any of the hundreds of apps that use it, just cannot be correct.
Additional note (April, 2011):
The addition of the noexcept specifier, along with changes to make many destructors noexcept by default, may have sufficiently addressed these concerns. CWG should consider changing this to NAD or extension status.
Rationale (August, 2011):
As given in the preceding note.
The term "throw exception" seems to sometimes refer to an expression of the form "throw expr" and sometimes just to the "expr" portion thereof.
As a result it is not quite clear to me when "uncaught_exception()" becomes true: before or after the temporary copy of the value of "expr" is made.
Is there a definite consensus about that?
Rationale: The standard is sufficiently clear; the phrase "to be thrown" indicates that the throw itself (which includes the copy to the temporary object) has not yet begun. The footnote in 14.6.2 [except.terminate] paragraph 1 reinforces this ordering.
See also issue 475.
With the adoption of paper N4259 specifying the std::uncaught_exceptions() function, the std::uncaught_exception() function should be deprecated.
Rationale (May, 2015):
This has already been done; see _N4140_.D.9 [depr.uncaught].
In language imported directly from the C Standard, 15.3 [cpp.include] paragraph 5 says,
The implementation provides unique mappings for sequences consisting of one or more nondigits (5.10 [lex.name]) followed by a period (.) and a single nondigit.
This is clearly intended to support C header names like stdio.h. However, C++ has header names like cstdio that do not conform to this pattern but still presumably require “unique mappings.”
Proposed resolution (April, 2006):
Change 15.3 [cpp.include] paragraph 5 as indicated:
The implementation provides unique mappings between the delimited sequence and the external source file name for sequences consisting of one or more nondigits or digits (5.10 [lex.name]), optionally followed by a period (.) and a single nondigit...
(Clark Nelson will discuss this revision with WG14.)
Additional notes (October, 2006):
WG14 takes no position on this proposed change.
Rationale (September, 2008):
It is unclear what effect the provision of “unique mappings” has or if a conforming program could detect the failure of an implementation to do so. There has been a significant effort to synchronize this clause with the corresponding section of the C99 Standard, and given the lack of perceptible impact of the proposed change, there is insufficient motivation to introduce a new divergence in the wording.
According to 15.3 [cpp.include] paragraph 4,
A preprocessing directive of the form
# include pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text (Each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens.). If the directive resulting after all replacements does not match one of the two previous forms, the behavior is undefined. The method by which a sequence of preprocessing tokens between a < and a > preprocessing token pair or a pair of " characters is combined into a single header name preprocessing token is implementation-defined.
It might be inferred from the phrase “in the directive” that only tokens before the terminating newline would be available for macro expansion, and that consequently the closing right parenthesis of a function-style macro must appear on the same line. However, it would be clearer if it used language like that of 15.6.2 [cpp.subst] paragraph 1:
each argument's preprocessing tokens are completely macro replaced as if they formed the rest of the preprocessing file; no other preprocessing tokens are available.
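For illustration, a minimal sketch (hypothetical macro names) of a computed #include of the kind the quoted wording governs; the recombination of the expanded tokens into a single header-name is implementation-defined but widely supported:

#define STD_HEADER(name) <name>
#define IO cstdio
#include STD_HEADER(IO)   // after macro replacement: #include <cstdio>
int main() { std::puts("included via a computed #include directive"); }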
Rationale (September, 2013):
The wording referring to preprocessing tokens “in the directive” is a clear enough indication that no tokens after the terminating newline are considered.
Is numeric_limits<int>::radix required to be 2? 17.3.5.2 [numeric.limits.members] paragraph 23 specifies:
static constexpr int radix;
For integer types, specifies the base of the representation.
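If a program wants to rely on a binary representation, it can state that assumption explicitly; a minimal sketch of such a check:

#include <limits>
// The quoted wording only says radix is "the base of the representation",
// so requiring 2 here is an assumption about the implementation.
static_assert(std::numeric_limits<int>::radix == 2,
              "this code assumes a binary representation for int");
int main() {}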
Rationale (November, 2016):
CWG felt that the current specification is sufficiently clear and there was no consensus for a change.
The definition of intmax_t and uintmax_t, inherited from C99, leaves open the possibility that the underlying types might not be the ones with the highest integer conversion rank. The requirements for these types deal only with the representation, not the conversion rank, and it is possible for, e.g., long and long long to have the same representation, although they have different conversion ranks. On such an architecture, choosing long instead of long long for intmax_t would be conforming.
Rationale (August, 2011):
This is a C compatibility issue and has ABI implications; there was no consensus to pursue a change.
Paper N3778 added the following two deallocation signatures to the standard library:
void operator delete(void* ptr, std::size_t size, const std::nothrow_t&) noexcept; void operator delete[](void* ptr, std::size_t size, const std::nothrow_t&) noexcept;
The core language does not currently provide for calling these functions; they could only be called as the matching deallocation function when a constructor throws an exception, but the rules for determining the matching deallocation function do not consider the existence of the sized-deallocation variants.
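A minimal sketch (hypothetical class) of the situation the rationale weighs: when the constructor throws in a nothrow new-expression, the matching deallocation function is the unsized nothrow form, not one of the sized signatures added by N3778:

#include <cstdio>
#include <new>
struct Thrower {
  Thrower() { throw 1; }
};
// Replacement of the unsized nothrow deallocation function, to observe the call.
void operator delete(void* p, const std::nothrow_t&) noexcept {
  std::puts("unsized nothrow operator delete called");
  ::operator delete(p);
}
int main() {
  try {
    Thrower* t = new (std::nothrow) Thrower;   // constructor throws; the storage is
    (void)t;                                   // reclaimed via the replacement above
  } catch (int) { }
}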
Rationale (November, 2014):
CWG agreed that the performance gain in using the sized-deallocation variants when a constructor throws an exception would be insignificant compared to the cost of the exception handling itself and thus insufficient motivation for changing the core language. The issue was referred to LWG for their consideration regarding removal of these signatures.
The grammar in Appendix A does not indicate a grammar sentence symbol and is therefore formally not a grammar.
Rationale (04/01):
Appendix A does not claim to be a formal grammar. The specification is clear enough in its current formulation.
9.3.4.5 [dcl.array] paragraph 1 says,
The expression is erroneous if:
...
its value is such that the size of the allocated object would exceed the implementation-defined limit (Annex B [implimits]);
...
The only relevant limit in Annex B [implimits] is that of the size of an object, but presumably an implementation might want to impose a smaller limit on a stack-based object. This separate quantity is referred to in paragraph 4 when describing an array of unspecified bound:
If the size of the array exceeds the size of the memory available for objects with automatic storage duration, the behavior is undefined.
but perhaps it needs to be mentioned in Annex B [implimits] as well.
Proposed resolution (September, 2013):
This issue is resolved by the resolution of issue 1761.
Rationale (February, 2014):
The specification was removed from the WP and moved into a Technical Specification.
Notes from the February, 2014 meeting:
CWG discussed adding such a limit, even without the changes for arrays of runtime bound, but decided that it was unneeded; such handling could be added by implementations if desirable.
Should there be an entry in Annex B [implimits] for the minimum number of elements an implementation should accept in an initializer-list, and if so, what should that be?
Rationale (June, 2014):
There are already related limits in Annex B [implimits] (array bounds, object size), which should be sufficient.
Annex C lists C compatibility issues. One item not in the annex came up in a discussion in comp.std.c++.
Consider this C and C++ code:
const int j = 0; char* p = (char*)j;
Rationale (10/99): Because j is not a constant expression in C, this code fragment has implementation-defined behavior in C. There is no incompatibility with C resulting from the fact that C++ defines this behavior.
The treatment of character literals containing universal-character-names is not clear. It is reasonable to conclude from 5.13.5 [lex.string] paragraph 15 that if a character named by a UCN cannot be represented by a single character in the runtime character set, it becomes a multibyte character and thus such a character literal is a multicharacter literal, with type int and an implementation-defined value. It would be nice if 5.13.3 [lex.ccon] had the complete story by itself or at least a reference to 5.13.5 [lex.string] for the details.
Rationale (February, 2012):
This issue is a duplicate of issue 912.
The wording in 6.3 [basic.def.odr] paragraph 2 about "potentially evaluated" is incomplete. It does not distinguish between expressions which are used as "integral constant expressions" and those which are not; nor does it distinguish between uses in which an object's address is taken and those in which it is not. (A suitable definition of "address taken" could be written without actually saying "address".)
Currently the definition of "use" has two parts (part (a) and (d) below); but in practice there are two more kinds of "use" as in (b) and (c):
I don't think we discussed (c).
Rationale (04/99): The substantive part of this issue is covered by Core issue 48.
Given the following test case:
enum E { e1, e2, e3 };
void f(int, E e = e1);
void f(E, E e = e1);
void g() {
  void f(long, E e = e2);
  f(1);    // calls ::f(int, E)
  f(e1);   // ?
}

First note that Koenig lookup breaks the concept of hiding functions through local extern declarations as illustrated by the call `f(1)'. Should the WP show this as an example?
Second, it appears the WP is silent as to what happens with the call `f(e1)': do the different default arguments create an ambiguity? is the local choice preferred? or the global?
Tentative Resolution (10/98) In 6.5.4 [basic.lookup.argdep] paragraph 2, change
If the ordinary unqualified lookup of the name finds the declaration of a class member function, the associated namespaces and classes are not considered.

to
If the ordinary unqualified lookup of the name finds the declaration of a class member function or the declaration of a function at block scope, the associated namespaces and classes are not considered.
Rationale (04/99): The proposal would also apply to local using-declarations (per Mike Ball) and was therefore deemed undesirable. The ambiguity issue is dealt with in Core issue 1.
The last bullet of the second paragraph of section 6.5.4 [basic.lookup.argdep] says that:
If T is a template-id, its associated namespaces and classes are the namespace in which the template is defined; for member templates, the member template's class; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which any template template arguments are defined; and the classes in which any member templates used as template template arguments are defined.
The first problem with this wording is that it is misleading, since one cannot get such a function argument whose type would be a template-id. The bullet should be speaking about template specializations instead.
The second problem is owing to the use of the word "defined" in the phrases "are the namespace in which the template is defined", "in which any template template arguments are defined", and "as template template arguments are defined". The bullet should use the word "declared" instead, since scenarios like the one below are possible:
namespace A {
  template<class T> struct test {
    template<class U> struct mem_templ { };
  };

  // declaration in namespace 'A'
  template<> template<> struct test<int>::mem_templ<int>;

  void foo(test<int>::mem_templ<int>&) { }
}

// definition in the global namespace
template<> template<> struct A::test<int>::mem_templ<int> { };

int main() {
  A::test<int>::mem_templ<int> inst;
  // According to the current definition of 3.4.2
  // foo is not found.
  foo(inst);
}
In addition, the bullet doesn't make it clear whether a T which is a class template specialization must also be treated as a class type, i.e. if the contents of the second bullet of the second paragraph of section 6.5.4 [basic.lookup.argdep],
- If T is a class type (including unions), its associated classes are: the class itself; the class of which it is a member, if any; and its direct and indirect base classes. Its associated namespaces are the namespaces in which its associated classes are defined. [This wording is as updated by core issue 90.]
must apply to it or not. The same stands for a T which is a function template specialization. This detail can make a difference in an example such as the one below:
template<class T>
struct slist_iterator {
  friend bool operator==(const slist_iterator& x, const slist_iterator& y)
  { return true; }
};

template<class T>
struct slist {
  typedef slist_iterator<T> iterator;
  iterator begin() { return iterator(); }
  iterator end() { return iterator(); }
};

int main() {
  slist<int> my_list;
  slist<int>::iterator mi1 = my_list.begin(), mi2 = my_list.end();
  // Must the friend function declaration
  //   bool operator==(const slist_iterator<int>&, const slist_iterator<int>&);
  // be found through argument dependent lookup? I.e. is the specialization
  // 'slist<int>' the associated class of the arguments 'mi1' and 'mi2'? If we
  // apply only the contents of the last bullet of 3.4.2/2, then the type
  // 'slist_iterator<int>' has no associated classes and the friend declaration
  // is not found.
  mi1 == mi2;
}
Suggested resolution:
Replace the last bullet of the second paragraph of section 6.5.4 [basic.lookup.argdep],
- If T is a template-id, its associated namespaces and classes are the namespace in which the template is defined; for member templates, the member template's class; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which any template template arguments are defined; and the classes in which any member templates used as template template arguments are defined.
with
- If T is a class template specialization, its associated namespaces and classes are those associated with T when T is regarded as a class type; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which the primary templates making template template arguments are declared; and the classes in which any primary member templates used as template template arguments are declared.
- If T is a function template specialization, its associated namespaces and classes are those associated with T when T is regarded as a function type; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which the primary templates making template template arguments are declared; and the classes in which any primary member templates used as template template arguments are declared.
Replace the second bullet of the second paragraph of section 6.5.4 [basic.lookup.argdep],
- If T is a class type (including unions), its associated classes are: the class itself; the class of which it is a member, if any; and its direct and indirect base classes. Its associated namespaces are the namespaces in which its associated classes are defined.
with
- If T is a class type (including unions), its associated classes are: the class itself; the class of which it is a member, if any; and its direct and indirect base classes. Its associated namespaces are the namespaces in which its associated classes are declared [Note: in case of any of the associated classes being a class template specialization, its associated namespace is actually the namespace containing the declaration of the primary class template of the class template specialization].
Rationale (September, 2012):
The concerns in this issue were addressed by the resolutions of issues 403 and 557.
According to 6.7.7 [class.temporary] paragraphs 4-5,
There are two contexts in which temporaries are destroyed at a different point than the end of the full-expression...
The second context is when a reference is bound to a temporary. The temporary to which the reference is bound or the temporary that is the complete object of a subobject to which the reference is bound persists for the lifetime of the reference...
It is not clear whether this applies to an example like the following:
struct S { }; const S& r = (const S&)S();
In one sense r is being bound to the temporary because the object to which r refers is the temporary object. From another perspective, however, r is being bound not to a temporary but to the lvalue expression (const S&)S(), or, more precisely, to the invented temporary variable described in 7.6.1.9 [expr.static.cast] paragraph 4:
Otherwise, an expression e can be explicitly converted to a type T using a static_cast of the form static_cast<T>(e) if the declaration T t(e); is well-formed, for some invented temporary variable t (9.4 [dcl.init]). The effect of such an explicit conversion is the same as performing the declaration and initialization and then using the temporary variable as the result of the conversion.
(Since the invented variable t is called a “temporary,” perhaps the intent is that its lifetime is extended to that of r, and then the lifetime of the S() temporary would be that of t. However, this reasoning is tenuous, and it may be better to make the intent explicitly clear.)
(See also issue 1299.)
Rationale (April, 2013):
This issue is a duplicate of issue 1376.
The requirement in 6.8 [basic.types] that a literal type must have a constexpr constructor has caused significant problems with respect to defaulted default constructors, since the determination of whether a constructor is constexpr depends on its definition and a defaulted special member function is only defined if it is odr-used. It might be better to remove that requirement, at least as it applies to defaulted default constructors.
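For illustration only (this sketch is not part of the issue text), the circularity can be seen in an example like:
struct S {
  int i = 0;
  // The default constructor is implicitly declared and defaulted; whether it is
  // constexpr can only be determined from its definition, which is implicitly
  // provided only when the constructor is odr-used.
};
constexpr S s{};  // well-formed only if S is a literal type, i.e., only if the
                  // implicitly-defined default constructor is constexpr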
Rationale (September, 2013):
This issue duplicates issue 1360.
According to 7.2.1 [basic.lval] paragraph 4,
Class prvalues can have cv-qualified types; non-class prvalues always have cv-unqualified types.
Presumably an array of a class type should also be permitted to have a cv-qualified type.
Rationale (October, 2012):
This issue is a subset of, and resolved by the resolution of, issue 1261.
The current Standard is not clear regarding the lifetime of a temporary created in the initializer of an init-capture:
void g() { struct S { S(int); ~S(); }; auto x = (S(1), [y = S(2)]{}, S(3)); }
Is the initializer for y considered a full-expression, or does the S(2) temporary persist until the end of the complete x initializer?
Rationale (June, 2014):
This issue is a duplicate of issue 1695.
According to 7.6.1.3 [expr.call] paragraph 11, when a function call is the operand of a decltype-specifier,
a temporary object is not introduced for the prvalue. The type of the prvalue may be incomplete. [Note: as a result, storage is not allocated for the prvalue and it is not destroyed; thus, a class type is not instantiated as a result of being the type of a function call in this context. This is true regardless of whether the expression uses function call notation or operator notation (12.2.2.3 [over.match.oper]). —end note] [Note: unlike the rule for a decltype-specifier that considers whether an id-expression is parenthesized (9.2.9.3 [dcl.type.simple]), parentheses have no special meaning in this context. —end note]
This relaxation of requirements on the return type of a function does not mention abstract classes, so presumably the following example is ill-formed:
struct Abstract { virtual ~Abstract() = 0; }; template<class T> T func(); typedef decltype(func<Abstract>()) type;
However, there is implementation variance on the treatment of the last line.
Rationale (November, 2014):
This issue is a duplicate of issue 1646.
In an expression of the form T(), 7.6.1.4 [expr.type.conv] paragraph 2 requires that T not be an array type. Now that temporary arrays can be created via a braced-init-list (see issue 1232), this restriction should be eliminated.
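A sketch of the asymmetry being described (not taken from the issue text):
typedef int A[3];
void h() {
  A{1, 2, 3};   // OK: a temporary array created from a braced-init-list (see issue 1232)
  A();          // disallowed by the requirement cited above that T not be an array type
}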
Rationale (August, 2011):
The implications of array temporaries for the language should be considered by the Evolution Working Group in a comprehensive fashion rather than on a case-by-case basis. See also issues 1307, 1326, and 1525.
Rationale (February, 2014):
This is a duplicate of issue 914.
According to 7.6.2.2 [expr.unary.op] paragraph 10,
There is an ambiguity in the unary-expression ~X(), where X is a class-name or decltype-specifier. The ambiguity is resolved in favor of treating ~ as a unary complement rather than treating ~X as referring to a destructor.
It is not clear whether this is intended to apply to an expression like (~S)(). In large measure, that depends on whether a class-name is an id-expression or not. If it is, the ambiguity described in 7.6.2.2 [expr.unary.op] paragraph 10 does apply; if not, the expression is an unambiguous reference to the destructor for class S. There are several places in the Standard that indicate that the name of a type is an id-expression, but that might be more confusing than helpful.
Rationale (February, 2021):
This issue is a duplicate of, and resolved by the resolution of, issue 1971.
7.6.2.5 [expr.sizeof] paragraph 1 says,
The sizeof operator shall not be applied... to an enumeration type before all its enumerators have been declared...
This prevents use of sizeof with an opaque enumeration type, even though the underlying type of such enumerations is known.
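A minimal illustration (not part of the issue text):
enum E : int;   // opaque enumeration declaration; the underlying type, and hence the size, is fixed
static_assert(sizeof(E) == sizeof(int), "");   // nonetheless rejected by the wording quoted above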
Rationale (May, 2009):
Duplicate of issue 803.
Should it be allowed to use an object of a class type having a single conversion function to an integral type as an array size in the first bound of the type in an array new?
struct A { operator int(); } a; int main () { new int[a]; }
There are similar accommodations for the expression in a delete (7.6.2.9 [expr.delete] paragraph 1) and in a switch (8.5.3 [stmt.switch] paragraph 2). There is also widespread existing practice on this (g++, EDG, MSVC++, and Sun accept it, and even cfront 3.0.2).
Rationale (October, 2004):
Duplicate of issue 299.
Does the Standard require that the deallocation function will be called if the destructor throws an exception? For example,
struct S { ~S() { throw 0; } }; void f() { try { delete new S; } catch(...) { } }
The question is whether the memory for the S object will be freed or not. It doesn't appear that the Standard answers the question, although most people would probably assume that it will be freed.
Notes from 04/01 meeting:
There is a widespread feeling that it is a poor programming practice to allow destructors to terminate with an exception (see issue 219). This question is thus viewed as a tradeoff between efficiency and supporting "bad code." It was observed that there is no way in the current language to protect against a throwing destructor, since the throw might come from a virtual override.
It was suggested that the resolution to the issue might be to make it implementation-defined whether the storage is freed if the destructor throws. Others suggested that the Standard should require that the storage be freed, with the understanding that implementations might have a flag to allow optimizing away the overhead. Still others thought that both this issue and issue 219 should be resolved by forbidding a destructor to exit via an exception. No consensus was reached.
Rationale (October, 2008):
It was noticed that issue 353, an exact duplicate of this one, was independently opened and resolved.
According to 8.6.5 [stmt.ranged] paragraph 1, the functions begin and end are looked up “with argument-dependent lookup (6.5.4 [basic.lookup.argdep])” for non-array, non-class types and for class types with no members of those names. It seems surprising that the lookup is different from the lookup that would result if the for statement were replaced by its nominal expansion, i.e., including (as does the referenced section, 6.5.4 [basic.lookup.argdep]) the result of ordinary unqualified lookup as well as the lookup in associated namespaces.
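A sketch of where the two lookup strategies could diverge (this example is not part of the issue text):
namespace N { struct R { }; }
int* begin(N::R&);   // found by ordinary unqualified lookup in the nominal expansion,
int* end(N::R&);     // but not by argument-dependent lookup (the global namespace is
                     // not an associated namespace of N::R)
void f(N::R r) {
  for (int x : r) { }   // well-formed under one reading, ill-formed under the other
}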
Rationale (February, 2012):
This issue is a duplicate of issue 1442.
Because the reference __range in the expansion of a range-based for statement, as described in 8.6.5 [stmt.ranged] paragraph 1, is bound only to the top-level expression of range-init, the lifetime of temporaries created at lower levels in that expression expires before the body of the loop is reached, leading to dangling references. It would be helpful if the lifetime of those temporaries were extended over the entire statement.
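A sketch of the kind of dangling reference being described (not taken from the issue text):
struct S {
  typedef int Array[3];
  Array a;
  const Array& get() const { return a; }
};
S make();
void f() {
  for (int x : make().get()) { }   // __range binds to the reference returned by get();
                                   // the S temporary is destroyed before the loop body
                                   // runs, so the loop reads from a dangling reference
}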
(See also issue 1523 for another question regarding the rewritten form of the range-based for.)
Rationale (October, 2012):
This is a duplicate of issue 900.
Given the example,
struct A{ operator auto(){ return 0; } }; int main(){ A a; a.operator auto(); // #1 a.operator int(); // #2 }
there is implementation divergence regarding which, if either, of the calls is well-formed. MSVC and clang reject #2, g++ rejects #1, and EDG rejects both.
According to 9.2.9.6.1 [dcl.spec.auto.general] paragraph 6:
A program that uses a placeholder type in a context not explicitly allowed in 9.2.9.6 [dcl.spec.auto] is ill-formed.
The use of auto as a conversion-type-id in a function call is not mentioned in that section; however, the section is dealing with declarative contexts rather than expressions, so it's not clear how much weight that observation should carry.
Rationale (December, 2021):
This issue is a duplicate of issue 1670.
According to 9.3.4 [dcl.meaning] paragraph 1, the declarator in the definition or explicit instantiation of a namespace member can only be qualified if the definition or explicit instantiation appears outside the member's namespace:
A declarator-id shall not be qualified except for the definition of a member function (11.4.2 [class.mfct]) or static data member (11.4.9 [class.static]) outside of its class, the definition or explicit instantiation of a function or variable member of a namespace outside of its namespace, or the definition of a previously declared explicit specialization outside of its namespace, or the declaration of a friend function that is a member of another class or namespace (11.8.4 [class.friend]). When the declarator-id is qualified, the declaration shall refer to a previously declared member of the class or namespace to which the qualifier refers, and the member shall not have been introduced by a using-declaration in the scope of the class or namespace nominated by the nested-name-specifier of the declarator-id.
There is no similar restriction on a qualified-id in a class definition (Clause 11 [class] paragraph 5):
If a class-head contains a nested-name-specifier, the class-specifier shall refer to a class that was previously declared directly in the class or namespace to which the nested-name-specifier refers (i.e., neither inherited nor introduced by a using-declaration), and the class-specifier shall appear in a namespace enclosing the previous declaration.
An elaborated-type-specifier in an explicit instantiation containing a qualified-id is also not prohibited from appearing in the namespace nominated by its nested-name-specifier (13.9.3 [temp.explicit] paragraph 2):
An explicit instantiation shall appear in an enclosing namespace of its template. If the name declared in the explicit instantiation is an unqualified name, the explicit instantiation shall appear in the namespace where its template is declared.
(This asymmetry is due to the removal of inappropriate mention of classes in 9.3.4 [dcl.meaning] by issue 40 and a failure to insert the intended restrictions elsewhere.)
An example of this inconsistency is:
namespace N { template <class T> struct S { }; template <class T> void foo () { } template struct N::S<int>; // OK template void N::foo<int>(); // ill-formed }
It is not clear that any purpose is served by the “outside of its namespace” restriction on declarators in definitions and explicit instantiations; if possible, it would be desirable to reconcile the treatment of declarators and class names by removing the restriction on declarators (which appears to be widespread implementation practice, anyway).
Rationale (April, 2006):
This is the same as issue 482.
The current wording of 9.3.4.6 [dcl.fct] paragraph 6 encompasses more than it should:
If the type of a parameter includes a type of the form “pointer to array of unknown bound of T” or “reference to array of unknown bound of T,” the program is ill-formed. [Footnote: This excludes parameters of type “ptr-arr-seq T2” where T2 is “pointer to array of unknown bound of T” and where ptr-arr-seq means any sequence of “pointer to” and “array of” derived declarator types. This exclusion applies to the parameters of the function, and if a parameter is a pointer to function or pointer to member function then to its parameters also, etc. —end footnote]
The normative wording (contrary to the intention expressed in the footnote) excludes declarations like
template<class T> struct S {}; void f(S<int (*)[]>);
and
struct S {}; void f(int(*S::*)[]);
but not
struct S {}; void f(int(S::*)[]);
Rationale (November, 2014):
This issue is a duplicate of issue 393.
9.3.4.6 [dcl.fct] paragraph 5 specifies that cv-qualifiers are deleted from parameter types. However, it's not clear what this should mean for function templates. For example,
template<class T> struct A { typedef A arr[3]; }; template<class T> void f(const typename A<T>::arr) { } template void f<int>(const A<int>::arr); // #1 template <class T> struct B { void g(T); }; template <class T> void B<T>::g(const T) { } // #2
If cv-qualifiers are dropped, then the explicit instantiation in #1 will fail to match; if cv-qualifiers are retained, then the definition in #2 does not match the declaration.
Rationale (August, 2010):
This is a duplicate of issue 1001.
9.3.4.7 [dcl.fct.default] paragraph 4 says:
For non-template functions, default arguments can be added in later declarations of a function in the same scope.
Why say “for non-template functions”? Why couldn't the following be allowed?
template <class T> struct B { template <class U> inline void f(U); }; template <class T> template <class U> inline void B<T>::f(U = int) {} // adds default arguments // is this well-formed? void g() { B<int> b; b.f(); }
If this is ill-formed, chapter 14 should mention this.
Rationale: This is sufficiently clear in the standard. Allowing additional default arguments would be an extension.
Notes from October 2002 meeting:
The example here is flawed. It's not clear what is being requested. One possibility is the extension introduced by issue 226. Other meanings don't seem to be useful.
It is not clear whether the following declaration is well-formed:
struct S { int i; } s = { { 1 } };
According to 9.4.2 [dcl.init.aggr] paragraph 2, a brace-enclosed initializer is permitted for a subaggregate of an aggregate; however, i is a scalar, not an aggregate. 9.4 [dcl.init] paragraph 13 says that a standalone declaration like
int i = { 1 };
is permitted, but it is not clear whether this says anything about the form of initializers for scalar members of aggregates.
This is (more) clearly permitted by the C89 Standard.
Rationale (May, 2008):
Issue 632 refers to exactly the same question and has a more detailed discussion of the considerations involved.
Issue 1696 did not fully address recursive references in default member initializers for aggregates, for example:
struct S { int i = (S{}, 0); };
or
struct S { int i = noexcept(S{}); };
In an example like
const int&r {1};
the expectation is that this creates a temporary of type const int containing the value 1 and binds the reference to it. And it does, but along the way it creates two temporaries. The wording in 9.4.5 [dcl.init.list] paragraph 3, the bullet on reference initialization, says that a prvalue temporary of type const int is created, and then we do reference binding. Because this is a non-class case and the source is a prvalue, we end up in the section of 9.4.4 [dcl.init.ref] that says we create a temporary (again of type const int) and initialize it from the source. So we've created two temporaries. Now, this may not matter to anyone, since the discarded temporary is not observable, but it may be a concern that the reference is not binding directly to the temporary created for the {1}, since we do sometimes base behavior on the “bind directly” attribute.
Rationale (September, 2012):
This issue is based on the wording prior to the application of the resolution of issue 1288. With that change, there is no longer a problem.
Consider:
void f() { tuple<int, int> a; auto &[x, y] = a; [x] {}; // ok, captures reference to int member of 'a' by value [&] { use(x); }; // ok, capture reference by reference } void g() { struct T { int a, b; } a; auto &[x, y] = a; [x] {}; // ill-formed, 'x' does not name a variable [&] { use(x); }; // ??? }
The standard is silent on whether and how identifiers of a decomposition declaration can be captured by a lambda.
Rationale (July, 2017):
This issue is a duplicate of issue 2308.
Given a namespace-scope declaration like
template<typename T> T var = T();
should var<const int> have internal linkage by virtue of its const-qualified type? Or should it inherit the linkage of the template?
Notes from the February, 2014 meeting:
CWG noted that linkage is by name, and a specialization of a variable template does not have a name separate from that of the variable template, thus the specialization will have the linkage of the template.
Rationale (February, 2021):
This issue is a duplicate of, and resolved by the resolution of, issue 2387.
WG14 intends to support alignment specifications in their next Standard. WG21 should explore possibilities for compatibility between C and C++ for these specifications. See paper N3093.
Rationale (August, 2010):
This is a duplicate of issue 1115.
The grammar for member-declaration in 11.4 [class.mem] does not allow an alias-declaration as a class member. This seems like an oversight.
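A minimal example of the member declaration in question (not part of the issue text):
struct S {
  using size_type = unsigned int;   // alias-declaration as a member-declaration; not
                                    // covered by the member-declaration grammar cited above
};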
Rationale (August, 2010):
This issue is a duplicate of 924.
The class
struct A { const int i; };
was previously considered a POD class, but it no longer is, because it has a non-trivial (deleted) copy assignment operator. The impact of this change is not clear.
Rationale (August, 2010):
This is a duplicate of issue 1140.
11.4.9.3 [class.static.data] paragraph 3 says,
If a static data member is of const literal type, its declaration in the class definition can specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression. A static data member of literal type can be declared in the class definition with the constexpr specifier; if so, its declaration shall specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression. [Note: In both these cases, the member may appear in constant expressions. —end note]
The note is misleading; to be used for its value in a constant expression, the static data member must either be declared constexpr or have integral or enumeration type. Strictly speaking, though, the note is true, because any static data member, initialized or not, may appear in an address constant expression if its address is taken.
I think the right fix is to change “const literal” back to “const integral or const enumeration.” It would also be nice to avoid the duplication of text.
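For illustration only (this sketch is not part of the issue text and relies on the draft wording quoted above, which the suggested fix would change), the distinction being drawn is roughly:
struct S { static const double d = 1.0; };   // const literal type: in-class initializer
                                             // permitted by the quoted wording
const double S::d;                           // definition (no initializer here)
const double* p = &S::d;                     // OK: S::d may appear in an address constant expression
// constexpr double v = S::d;                // its value cannot be used: S::d is neither constexpr
                                             // nor of integral or enumeration type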
Rationale (November, 2010):
This is a duplicate of issue 1101.
Although accepted by several compilers and used in popular code, the grammar currently does not permit the use of a dependent template name in a base-specifier or mem-initializer-id, for example:
template<typename T, typename U> struct X : T::template apply<U> { };
There does not seem to be a good reason to reject this usage.
Rationale (August, 2010):
This issue is a duplicate of issue 314.
Consider the following example:
struct B { void f(){} }; class N : protected B { }; struct P: N { friend int main(); }; int main() { N n; B& b = n; // R b.f(); }
This code is rendered well-formed by bullet 3 of 11.8.3 [class.access.base] paragraph 4, which says that a base class B of N is accessible at R if
R occurs in a member or friend of a class P derived from N, and an invented public member of B would be a private or protected member of P
This provision circumvents the additional restrictions on access to protected members found in 11.8.5 [class.protected] — main() could not call B::f() directly because the reference is not via an object of the class through which access is obtained. What is the purpose of this rule?
Rationale (April, 2010):
This is a duplicate of issue 472.
The resolution of issue 597 and anticipated resolution of issue 1517 allow access to non-virtual base classes outside the lifetime of the object. However, for no apparent reason, references to nonstatic data members are still prohibited. This disparity should be rectified.
Rationale (November, 2014):
This issue is a duplicate of issue 1530.
It looks like the resolution to issue 1138 neglected to update 12.2.2.7 [over.match.ref] (in N3225) or otherwise mention it from 9.4.4 [dcl.init.ref] for the rvalue cases.
Since this is a context where overload resolution is required (to find the correct conversion function), 12.2.2.7 [over.match.ref] should probably be used (12.2 [over.match] paragraph 2); however, there are some oddities.
(Issue 1)
Consider:
struct A { typedef void functype(); operator functype&&(); }; void (&&x)() = A();
We are looking for a function lvalue; 12.2.2.7 [over.match.ref] (if we take it to be applicable) says the viable functions are limited to conversion functions yielding “lvalue reference” when 9.4.4 [dcl.init.ref] requires an lvalue result. The above would then fail to have any candidate conversion functions and we are left with a non-viable indirect binding to a “function temporary.”
(Issue 2)
Also, since the candidate functions in the case where an rvalue (that is prvalue or xvalue) result is required do not include ones which return lvalue references, I do not see what the wording regarding the second standard conversion sequence having an lvalue-to-rvalue transformation added in issue 1138 is meant to catch.
If the example containing
int&& rri2 = X();
with a comment about operator int&() is a clue, then it seems that 12.2.2.7 [over.match.ref] is being ignored.
(Conclusion)
It would seem that 12.2.2.7 [over.match.ref] should apply (and be fixed to match the cases from issue 1138), the verbiage about the second standard conversion is redundant, and the explanation in the example is wrong.
In particular, the previous wording for 9.4.4 [dcl.init.ref] did have distinct bullets for converting to an “lvalue” and to an “rvalue;” we now have a bullet which is not exclusively one or the other.
Possible fix
Add reference to [12.2.2.7 [over.match.ref]] in 9.4.4 [dcl.init.ref] for direct binding to rvalue reference/const non-volatile via UDC.
Remove the redundant sentence referring to the second SCS.
Modify example to indicate operator int&() is not a candidate function.
Clarify that the point from 9.4.4 [dcl.init.ref] below:
has a class type (i.e., T2 is a class type), where T1 is not reference-related to T2, and can be implicitly converted to an xvalue, class prvalue, or function lvalue of type “cv3 T3,” where “cv1 T1” is reference-compatible with “cv3 T3”...
is an rvalue case for 12.2.2.7 [over.match.ref] for non-function types and lvalue case for function types.
Fix 12.2.2.7 [over.match.ref] to allow candidate functions return rvalue reference to function type for lvalue cases.
Update 110510:
The example appears to be actually well-formed because the wording about the second SCS is not triggered. Falling through to indirect binding then succeeds.
Rationale (February, 2012):
This issue is a duplicate of issue 1328.
Rationale (August, 2017):
This issue is a duplicate of issue 2243.
According to 12.2.4.2.5 [over.ics.ref] paragraph 3,
Except for an implicit object parameter, for which see 12.2.2 [over.match.funcs], a standard conversion sequence cannot be formed if it requires binding an lvalue reference to non-const to an rvalue or binding an rvalue reference to an lvalue.
This isn't precisely the restriction placed by 9.4.4 [dcl.init.ref] on binding an lvalue reference to an rvalue; the requirement there is that the cv-qualification must be exactly const in such cases. This has an impact on the interpretation of the following example:
void f(const volatile int&); void f(...); void g() { f(1); }
Because f(const volatile int&) is considered a viable function for the call, it is a better match than f(...), but the binding of the argument to the parameter cannot be done, so the program is ill-formed. Presumably “lvalue reference to non-const” should be clarified to exclude the const volatile case. (Implementations vary on their handling of this example.)
Rationale (November, 2010):
This is a duplicate of issue 1152.
The following example is ambiguous according to the Standard:
struct Y { operator int(); operator double(); }; void f(Y y) { double d; d = y; // Ambiguous: Y::operator int() or Y::operator double()? }
The reason for the ambiguity is that 12.5 [over.built] paragraph 18 says that there are candidate functions double& operator=(double&, int) and double& operator=(double&, double) (among others). In each case, the second argument is converted by a user-defined conversion sequence (12.2.4.2.3 [over.ics.user]) where the initial and final standard conversion sequences are the identity conversion — i.e., the conversion sequences for the second argument are indistinguishable for each of these candidate functions, and they are thus ambiguous.
Intuitively one might expect that, because it converts directly to the target type in the assignment, Y::operator double() would be selected, and in fact, most compilers do select it, but there is currently no rule to distinguish between these user-defined conversions. Should there be?
Additional note (May, 2008):
Here is another example that is somewhat similar:
enum En { ec }; struct S { operator int(); operator En(); }; void foo () { S() == 0; // ambiguous? }
According to 12.5 [over.built] paragraph 12, the candidate functions are
bool operator==(L, R);
where R is int and L is every promoted arithmetic type. Overload resolution proceeds in two steps: first, for each candidate function, determine which implicit conversion sequence is used to convert from the argument type to the parameter type; then compare the candidate functions on the basis of the relative costs of those conversion sequences.
In the case of operator==(int, int) there is a clear winner: S::operator int() is chosen because the identity conversion int -> int is better than the promotion En -> int. For all the other candidates, the conversion for the first parameter is ambiguous: both S::operator int() and S::operator En() require either an integral conversion (for integral L) or a floating-integral conversion (for floating point L) and are thus indistinguishable.
These additional candidates are not removed from the set of viable functions, however; because of 12.2.4.2 [over.best.ics] paragraph 10, they are assigned the “ambiguous conversion sequence,” which “is treated as a user-defined sequence that is indistinguishable from any other user-defined conversion sequence.” As a result, all the viable functions are indistinguishable and the call is ambiguous. Like the earlier example, one might naively think that the exact match with S::operator int() and bool operator==(int, int) would be selected, but that is not the case.
Rationale (August, 2010):
Duplicate of issue 260.
John Spicer: The standard does say that a namespace scope template has external linkage unless it is a function template declared "static". It doesn't explicitly say that the linkage of the template is also the linkage of the instantiations, but I believe that is the intent. For example, a storage class is prohibited on an explicit specialization to ensure that a specialization cannot be given a different storage class than the template on which it is based.
Mike Ball: This makes sense, but I couldn't find much support in the document. Sounds like yet another interpretation to add to the list.
John Spicer: The standard does not talk about the linkage of instantiations, because only "names" are considered to have linkage, and instances are not really names. So, from an implementation point of view, instances have linkage, but from a language point of view, only the template from which the instances are generated has linkage.
Mike Ball: Which is why I think it would be cleaner to eliminate storage class specifiers entirely and rely on the unnamed namespace. There is a statement that specializations go into the namespace of the template. No big deal, it's not something it says, so we live with what's there.
John Spicer: That would mean prohibiting static function templates. I doubt those are common, but I don't really see much motivation for getting rid of them at this point.
"export" is an additional attribute that is separate from linkage, but that can only be applied to templates with external linkage.
Mike Ball: I can't find that restriction in the standard, though there is one that templates in an unnamed namespace can't be exported. I'm pretty sure that we intended it, though.
John Spicer: I can't find it either. The "inline" case seems to be addressed, but not static. Surely this is an error as, by definition, a static template can't be used from elsewhere.
Rationale: Duplicate of Core issue 69.
Currently, 13.4.3 [temp.arg.nontype] paragraph 1 only requires that an object whose address is used as a non-type template argument have external linkage, thus allowing objects of thread storage duration to be used. The requirement should presumably be for an object to have static storage duration as well as external linkage.
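A sketch of the case being described (not part of the issue text):
template<int* P> struct X { };
extern thread_local int tls;   // external linkage, but thread (not static) storage duration
X<&tls> x;                     // allowed by the wording described above, which requires
                               // only external linkage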
Rationale (August, 2010):
This is a duplicate of issue 1154.
The Standard is not clear on the treatment of an example like the following, and there is implementation variance:
template<class ...Types> struct Tuple_ { // _VARIADIC_TEMPLATE
template<Types ...T, int> int f() {
return sizeof...(Types);
}
};
int main() {
Tuple_<char,int> a;
int b = a.f<1, 2, 3>();
}
Rationale (February, 2019):
This issue is covered in more detail in issue 2395.
See also issue 2025.
Given the declarations
template<int> using T = int; template<typename U> void h(T<f(U())>); template<typename U> void h(T<g(U())>);
Does this declare one function template or two?
Rationale (November, 2014):
This issue is a duplicate of issue 1980.
Consider the following example:
template<typename ...T> struct X { void f(); static int n; }; template<typename T, typename U> using A = T; template<typename ...T> void X<A<T, decltype(sizeof(T))>...>::f() {} template<typename ...T> int X<A<T, decltype(sizeof(T))>...>::n = 0; void g() { X<void>().f(); X<void>::n = 1; }
Should this be valid? The best answer would seem to be to produce an error during instantiation, and that appears to be consistent with the current Standard wording, but there is implementation divergence.
See also issue 2021.
Rationale (May, 2015):
This issue is a duplicate of issue 1979.
The description of how the partial ordering of template functions is determined in 13.7.7.3 [temp.func.order] paragraphs 3-5 does not make any provision for nondeduced template parameters. For example, the function call in the following code is ambiguous, even though one template is "obviously" more specialized than the other:
template <class T> T f(int); template <class T, class U> T f(U); void g() { f<int>(1); }
The reason is that neither function parameter list allows template parameter T to be deduced; both deductions fail, so neither template is considered more specialized than the other and the function call is ambiguous.
One possibility of addressing this situation would be to incorporate explicit template arguments from the call in the argument deduction using the transformed function parameter lists. In this case, that would result in finding the first template to be more specialized than the second.
Rationale (04/00):
This issue is covered in a more general context in issue 214.
There appears to be no requirement that a redeclaration of an alias template must be equivalent to the earlier one. An alias-declaration is not a definition (6.2 [basic.def] paragraph 2), so presumably an alias template declaration is also not a definition and thus the ODR does not apply.
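For illustration (not part of the issue text), the question is whether anything requires a diagnostic for a non-equivalent redeclaration like:
template<typename T> using Ptr = T*;
template<typename T> using Ptr = const T*;   // not equivalent to the earlier declaration;
                                             // no wording clearly prohibits this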
Rationale (November, 2014):
This issue is superseded by issue 1896.
Section 13.8 [temp.res] paragraph 4 uses the following example to show that qualified name lookup described in Section 6.5.5 [basic.lookup.qual] applies even in the presence of "typename":
struct A { struct X { } ; int X ; } ; template<class T> void f(T t) { typename T::X x ; // ill-formed: finds the data member X // not the member type X }
This example is confusing because the definition of the template function itself is not ill-formed unless it is instantiated with "A" as the template argument. In other words, the example should be modified to something like:
struct A { struct X { } ; int X ; } ; struct B { struct X { } ; } ; template<class T> void f(T t) { typename T::X x ; } void foo() { A a ; B b ; f(b) ; // OK -- finds member type B::X. f(a) ; // ill-formed: finds the data member A::X not // the member type A::X. }
Notes from October 2002 meeting:
This is a duplicate of Core Issue 345.
According to 13.8.3 [temp.dep] paragraph 3,
In the definition of a class or class template, if a base class depends on a template-parameter, the base class scope is not examined during unqualified name lookup either at the point of definition of the class template or member or during an instantiation of the class template or member.
Note that this is phrased not as “if a base class is a dependent type” but as “if a base class depends on a template-parameter;” the current instantiation depends on a template-parameter but is not a dependent type. The difference can be seen in this example:
template<typename T> struct A {
typedef int type;
struct C;
};
template<typename T> struct A<T>::C {
void type();
struct B;
};
template<typename T> struct A<T>::C::B : A<T> {
type x;
};
A<int>::C::B b; // #1
If the excluded bases were dependent types, the reference to type at #1 would resolve to A::type; with the current wording, the reference resolves to C::type.
(See also issue 1524 for another case in which this distinction makes a difference.)
Rationale (September, 2012):
This issue is a duplicate of issue 591.
According to 13.9.4 [temp.expl.spec] paragraph 15,
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized. [Example:
template<class T1> class A { template<class T2> class B { void mf(); }; }; template<> template<> class A<int>::B<double>; template<> template<> void A<char>::B<char>::mf();—end example]
However, in the declaration of A<int>::B<double>, A<int> is not explicitly specialized; it is implicitly instantiated.
Rationale (November, 2014):
This issue is a duplicate of issue 529.
It would be useful to be able to deduce an array bound from the number of elements in an initializer list.
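A sketch of the kind of deduction being requested (not part of the issue text):
template<typename T, int N> void f(const T (&)[N]);
void g() {
  f({1, 2, 3});   // the request: deduce T as int and N as 3 from the initializer list
}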
Rationale (August, 2011):
The implications of array temporaries for the language should be considered by the Evolution Working Group in a comprehensive fashion rather than on a case-by-case basis. See also issues 1300, 1307, and 1525.
Rationale (January, 2021):
This issue is a duplicate of issue 1591.
Consider the following example:
template<typename T, int N> void g(T (* const (&)[N])(T)) { } int f1(int); int f4(int); char f4(char); void f() { g({ &f1, &f4 }); // OK, T deduced to int, N deduced to 2? }
There is implementation divergence on the handling of this example. According to 13.10.3.2 [temp.deduct.call] paragraph 1,
If removing references and cv-qualifiers from P gives std::initializer_list<P'> or P'[N] for some P' and N and the argument is a non-empty initializer list (9.4.5 [dcl.init.list]), then deduction is performed instead for each element of the initializer list, taking P' as a function template parameter type and the initializer element as its argument, and in the P'[N] case, if N is a non-type template parameter, N is deduced from the length of the initializer list.
Deduction for the &f4 element fails due to ambiguity, so by 13.10.3.6 [temp.deduct.type] bullet 5.5.1 the function parameter is a non-deduced context.
It is not clear, however, whether that implies that the function parameter is a non-deduced context from the perspective of the entire deduction, so that we cannot deduce T and N, or whether it is only a non-deduced context for this slice of the initializer-list deduction and we can still deduce the template parameters from the &f1 element.
See also issue 1513.
Rationale (July, 2017):
This issue is a duplicate of issue 2318.
Rationale (August, 2011):
This issue duplicates issue 455.
14.5 [except.spec] paragraph 1 says,
An exception-specification shall appear only on a function declarator in a function, pointer, reference or pointer to member declaration or definition.
This wording forbids exception specifications in declarations where they might plausibly occur (e.g., an array of function pointers). This restriction seems arbitrary. It's also unclear whether this wording allows declarations such as
void (*f())() throw(int); // returns a pointer to a function // that might throw "int"
At the same time, other cases are allowed by the wording in paragraph 1 (e.g., a pointer to a pointer to a function), but no checking for such cases is specified in paragraph 3. For example, the following appears to be allowed:
void (*p)() throw(int); void (**pp)() throw() = &p;
Rationale (10/99): Duplicate of issues 87 and 92.
A type used in an exception specification must be complete (14.5 [except.spec] paragraph 2). The resolution of issue 437 stated that a class type appearing in an exception specification inside its own member-specification is considered to be complete. Should this also apply to exception specifications in class templates instantiated because of a reference inside the member-specification of a class? For example,
template<class T> struct X { void f() throw(T) {} }; struct S { X<S> xs; };
Note, January, 2012:
With the deprecation of dynamic-exception-specifications, the importance of this issue is reduced even further. The current specification is clear, and the suggested resolution is an extension. It has been suggested that the issue be closed as NAD.
Notes from the February, 2012 meeting:
The outcome of this issue will be affected by the resolution of issue 1330. See also issue 287.
This issue is subsumed by the newer issue 1330 and should be discussed in that context.
The expected behavior of the following example is not clear:
template<class T> struct Y { typedef typename T::value_type blah; void swap(Y<T> &); }; template<class T> void swap(Y<T>& Left, Y<T>& Right) noexcept(noexcept(Left.swap(Right))) { } template <class T> struct Z { void swap(Z<T> &); }; template<class T> void swap(Z<T>& Left, Z<T>& Right) noexcept(noexcept(Left.swap(Right))) { } Z<int> x00, y00; constexpr bool b00 = noexcept(x00.swap(y00)); // Instantiates the Z<int> overload: template void swap<int>(Z<int>&, Z<int>&) noexcept(b00);
The question is whether the explicit instantiation directive also instantiates the Y<int> overload and thus Y<int> (because of the exception specification), which will fail because of the reference to T::value_type with T=int.
According to 14.5 [except.spec] bullet 13.3, one of the contexts in which an exception specification is needed (thus triggering its instantiation) is when:
the exception specification is compared to that of another declaration (e.g., an explicit specialization or an overriding virtual function);
In this example, the declarations of swap must be compared in order to determine which function template is being instantiated, resulting in the instantiation of Y<int>. There is implementation divergence, however, with some accepting the example and some issuing an error for the instantiation of Y<int>.
Rationale (February, 2022): Duplicate of issue 2417.
The example in 17.6.3.4 [new.delete.placement] reads:
[Example: This can be useful for constructing an object at a known address:
char place[sizeof(Something)]; Something* p = new (place) Something();
—end example]
This example has potential alignment problems. One way to correct it would be to change the definition of place to read:
char* place = new char[sizeof(Something)];
Rationale (10/99): This is an issue for the Library Working Group.
Access declarations were removed from C++11 but are not mentioned in C.5 [diff.cpp03].
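For reference (this example is not part of the issue text), an access declaration was the C++03 construct in question: a member using-declaration written without the using keyword:
struct B { int m; };
struct D : private B {
  B::m;   // access declaration: valid (though deprecated) in C++03, removed in C++11;
          // the equivalent C++11 form is "using B::m;"
};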
Rationale (January, 2012):
Issue 1279 already deals with missing differences between C++03 and C++11; this specific item has been added to the list there.
In the example in _N2914_.14.10.1.1 [concept.fct] paragraph 10,
concept EqualityComparable<typename T> { bool operator==(T, T); bool operator!=(T x, T y) { return !(x == y); } }
is the call to operator== in the default implementation well-formed, or is another requirement needed to allow the arguments to be passed by value? If another requirement is needed, should it be added in this example, or should the rules for implicit requirements be changed so that the example is well-formed?
According to _N2914_.14.11.1.1 [temp.req.sat] paragraph 2, a concept requirement is satisfied if a concept map with the same name and template argument list is found by concept map lookup. The point at which the name of a concept map is inserted into its scope is, according to _N2914_.14.10.2 [concept.map] paragraph 2, immediately following its concept-id. This enables a requirement on a member of a concept map to be satisfied by the concept map in which it appears, for example:
concept C2<typename T>{} concept D2<typename T> { typename type; requires C2<type>; } template<D2 T> concept_map C2<T>{} concept_map D2<int> { typedef int type; // Okay }
However, these rules might lead to problems with the concept maps that the compiler tries but fails to generate for auto concepts. Presumably a compiler might insert the name of the generated concept map into the containing scope, so that it can satisfy its own requirements, but then if some other requirement cannot be satisfied and thus the concept map is not defined after all (_N2914_.14.10.2 [concept.map] paragraph 11), the name must then be removed again. It might be clearer to make the point of definition for a concept map after the closing brace and just have a special case for how the concept map is handled within its own definition.
On a related note, the current specification seems unclear about whether a failure to generate a concept map for an auto concept means that no further attempts will be made to generate it. Consider the following example:
auto concept A<typename X> { /* */ } auto concept B<typename X> : A<X> { void enabler(X); } template <A T> void f(T x); // #1 template <B T> void f(T x); // #2 class C { // a class that satisfies A<C> but not B<C> // because no enabler(X) in scope }; int foo() { C x; f(x); // #3 } void enabler(C); int bar() { C x; f(x); // #4 }
At #3, the concept map for B cannot be generated, so the call invokes #1. There doesn't appear to be anything currently that indicates that the reference at #4 should not once again attempt to generate the concept map for B, which will succeed this time and make the call invoke #2. It seems preferable that both calls should invoke #1, but that does not seem to be the effect of the current wording.
Given a concept and an unconstrained template, e.g.,
auto concept HasFoo<typename T> { void foo(T&); } template<typename T> struct SomeThing { void bar(){} };
how can one write a concept map template that adapts all specializations of SomeThing to concept HasFoo? Because a concept map template is a constrained context, referring to SomeThing violates the prohibition against using a specialization of an unconstrained template.
Surrounding the entire concept map template with late_check would appear not to work; the location of the late_check is in the unconstrained context, and late_check is ignored in unconstrained contexts.
One possibility would be to allow late_check to appear in the concept_map syntax.
_N2914_.14.10.2.1 [concept.map.fct] paragraph 3 says,
Construct an expression E (as defined below) in the scope of the concept map.
This is the wrong context for this expression. Requirement members are visible to name lookup, and they are obviously not a desirable lookup result; names within the concept map should be invisible during the evaluation of E. Presumably this should read,
...in the scope in which the concept map is defined.
_N2914_.14.10.2.1 [concept.map.fct] paragraph 5 says,
Each satisfied associated function (or function template) requirement has a corresponding associated function candidate set. An associated function candidate set is a candidate set (_N2914_.14.11.3 [temp.constrained.set]) representing the functions or operations used to satisfy the requirement. The seed of the associated function candidate set is determined based on the expression E used to determine that the requirement was satisfied.
If the evaluation of E involves overload resolution at the top level, the seed is the function (12.2.2 [over.match.funcs]) selected by the outermost application of overload resolution (Clause 12 [over]).
Otherwise, if E is a pseudo destructor call (_N4778_.7.6.1.4 [expr.pseudo]), the seed is a pseudo-destructor-name.
Otherwise, the seed is the initialization of an object.
It is not clear that this takes built-in operators into account. For example:
concept C<class T, class U> { typename R; R operator+( T, U ); } concept_map C<int, double> {}
Is the following well-formed?
auto concept HasDestructor<typename T> { T::~T(); } concept_map HasDestructor<int&> { }
According to _N2914_.14.10.2.1 [concept.map.fct] paragraph 4, the destructor requirement in the concept map results in an expression x.~X(), where X is the type int&. According to _N4778_.7.6.1.4 [expr.pseudo], this expression is ill-formed because the object type and the type-name must be the same type, but the object type cannot be a reference type (references are dropped from types used in expressions, Clause 7 [expr] paragraph 5).
It is not clear whether this should be addressed by changing _N4778_.7.6.1.4 [expr.pseudo] or _N2914_.14.10.2.1 [concept.map.fct].
It is possible that under some circumstances an expression created under the rules of _N2914_.14.10.2.1 [concept.map.fct] might be syntactically ambiguous with declarations, in which case they would be interpreted as declarations and not expressions. It would be helpful to have an explicit statement to the effect that the expressions created by these rules shall always be interpreted as expressions and never as declarations.
Given the following example:
auto concept C<typename T, typename U> { Returnable U; typename type = T&&; U::U(type); }
_N2914_.14.10.2.2 [concept.map.assoc] paragraph 5 says,
If an associated type or class template (_N2914_.14.10.1.2 [concept.assoc]) has a default value, a concept map member satisfying the associated type or class template requirement shall be implicitly defined by substituting the concept map arguments into the default value.
It is not clear what the order of processing should be between this step and the formation of the expression in _N2914_.14.10.2.1 [concept.map.fct]. Deduction of the associated type (in _N2914_.14.10.2.2 [concept.map.assoc] paragraph 4) isn't used in this example, but in general requires the expression, but the expression can't be created without the definition of the associated type. Perhaps the approach should be to attempt to define the expression, fail for want of the associated type, apply the default, and then try to define the expression again. Whatever the answer, this needs to be spelled out more clearly.
The example in _N2914_.14.10.3.2 [concept.refine.maps] paragraph 3 reads:
concept C<typename T> { } concept D<typename T, typename U> : C<T> { } template<typename T> struct A { }; template<typename T> concept_map D<A<T>, T> { } ...
Since all concept maps templates are constrained templates, we know that we're in a constrained context at the point of the concept_map keyword. Then the first argument to D is A<T>, and A is an unconstrained template, so this is ill-formed by _N2914_.14.11 [temp.constrained] paragraph 5:
Within a constrained context, a program shall not require a template specialization of an unconstrained template for which the template arguments of the specialization depend on a template parameter.
Suggestion: make A a constrained template, e.g.,
template<std::ObjectType T> struct A { };
Additional notes (May, 2009):
There are other examples that exhibit the same problem. For example, _N2960_.14.6.8 [temp.concept.map] paragraph 7 has this example:
concept Stack<typename X> { typename value_type; value_type& top(X&); // ... } template<typename T> struct dynarray { T& top(); }; template<> struct dynarray<bool> { bool top(); }; template<typename T> concept_map Stack<dynarray<T>> { typedef T value_type; T& top(dynarray<T>& x) { return x.top(); } }
dynarray needs to be constrained. Similarly, in _N2914_.14.10.2.2 [concept.map.assoc] paragraph 3, in the example
concept Allocator<typename Alloc> { template<class T> class rebind_type; } template<typename T> class my_allocator { template<typename U> class rebind_type; }; template<typename T> concept_map Allocator<my_allocator<T>> { template<class U> using rebind_type = my_allocator<T>::rebind_type; }
my_allocator must be constrained. (Note also the missing template argument in the target of the template alias declaration.)
Trivially copyable types were added in 6.8 [basic.types], so we think it is necessary to add a corresponding concept, such as TriviallyCopyableType.
Notes from the March, 2009 meeting:
It is not clear whether this should be supported here or in _N2914_.20.2.9 [concept.copymove], similar to TriviallyCopyConstructible and TriviallyCopyAssignable.
_N2914_.14.11 [temp.constrained] paragraph 5 says,
Within a constrained context, a program shall not require a template specialization of an unconstrained template for which the template arguments of the specialization depend on a template parameter.
This would appear to indicate that an example like the following is ill-formed:
auto concept C<class T> {}; template<template<class> class T, C U> struct Y { Y() { T<U> x; // Well-formed? } };
because T' is not a constrained template archetype. However, this is not the intended outcome. The wording needs to be clarified on this point (and an example and a note explaining the rationale would be helpful).
(See also issues 849 and 851.)
It should be possible to support boolean constant expressions as requirements without resorting to defining the True concept in the library. Boolean expressions are very likely to be constraints when dealing with non-type template parameters and variadic templates, and constraints in these cases should feel just as natural as constraints on the type system.
The use of && as the separator for a list of requirements has shown itself to be a serious teachability problem. The mental model behind && treats concepts as simple predicates, which ignores the role of concepts in type-checking templates. The more programmers read into the && (and especially try to fake || with && and !), the harder it is for them to understand the role of concept maps. Simply changing the separator to , would eliminate a significant source of confusion.
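For concreteness (this sketch is not part of the issue text, and the concept names are merely illustrative), the suggestion amounts to writing the second form below instead of the first:
template<typename T> requires InputIterator<T> && EqualityComparable<T> void f(T);
// suggested spelling:
// template<typename T> requires InputIterator<T>, EqualityComparable<T> void f(T);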
The example in _N2914_.14.11.1.1 [temp.req.sat] paragraph 6 reads,
concept C<typename T> { } concept D<typename T> { } namespace N2 { template<C T> void f(T); // #1 template<C T> requires D<T> void f(T); // #2 template<C T> void g(T x) { f(x); } ...
The call f(x) is ill-formed without a constraint indicating that x can be passed by value.
Consider the following example:
auto concept A<class T> { int f(T); } auto concept B<class T> { int f(T); template<class U> requires A<U> // brings f(U') into scope auto g(T p, U q) -> decltype(f(p) + f(q)); // Both B<T>::f(T) and A<U>::f(U) needed here // but B<T>::f(T) is hidden by A<U>::f(U) // (declared in the same scope as g's template parameters) }
This is similar to the case that motivated 11.4 [class.mem] paragraph 19:
A constrained member is treated as a constrained template (_N2914_.14.11 [temp.constrained]) whose template requirements include the requirements specified in its member-requirement clause and the requirements of each enclosing constrained template.
See also _N2914_.14.10.1.1 [concept.fct] paragraph 10 for a similar rule for default implementations.
A more general version of this merging of requirements is needed, but it does not appear to exist. _N2914_.14.11.1.2 [temp.req.impl] would seem to be the logical place for such a rule.
The example at the end of _N2914_.14.11.2 [temp.archetype] paragraph 13 reads,
auto concept CopyConstructible<typename T> { T::T(const T&); } template<CopyConstructible T> struct vector; auto concept VectorLike<typename X> { typename value_type = typename X::value_type; X::X(); void X::push_back(const value_type&); value_type& X::front(); } template<CopyConstructible T> requires VectorLike<vector<T>> // vector<T> is an archetype (but not an instantiated archetype) void f(const T& value) { vector<T> x; // OK: default constructor in VectorLike<vector<T> > x.push_back(value); // OK: push_back in VectorLike<vector<T> > VectorLike<vector<T>>::value_type& val = x.front(); // OK: front in VectorLike<vector<T> > }
However, x.push_back(value) is, in fact, ill-formed: there is no relationship between VectorLike<vector<T>>::value_type and T in this example. The function needs one further requirement, e.g., std::SameType<VectorLike<vector<T>>::value_type, T> to allow use of the function parameter value as the argument of the push_back call.
Suppose we have
template<std::ObjectType T> T* f(T* p) { return ++p; // Presumably ok }
7.6.2.3 [expr.pre.incr] paragraph 1 requires that “The type of the operand shall be an arithmetic type or a pointer to a completely-defined effective object type.” At ++p in this example, the type archetype T' is considered to be completely-defined because
A type archetype is considered to be completely defined when it is established
(_N2914_.14.11.2.1 [temp.archetype.assemble] paragraph 1) and 13.9.3 [temp.explicit] paragraph 7 says that an archetype becomes established when
the archetype is used in a context where a complete type is required
So far, so good. Consider use of f(T*) with an incomplete type, for instance:
struct A; // A is not defined yet. A* g(A* p) { return f(p); }
During template argument deduction against the template f(T*), we find that there is a concept map for std::ObjectType<A> because std::ObjectType is a compiler-supported concept, and because A is an object type (6.8 [basic.types]), so the compiler provides the concept map implicitly. Type deduction succeeds, but then we get an instantiation-time error on ++p because A is incomplete.
I see two potential solutions:
We can remove built-in operations for ptr-to-effective-object-type, so that you would have to explicitly require something like std::HasPreincrement<T*> before using ++ on values of type T* in f(T*). Then A's lack of completeness would be indicated when we try to satisfy those requirements automatically (and not at instantiation time).
Alternatively, we can introduce the notion of a compiler-supported concept std::CompleteType<T>, and amend _N2914_.14.11.2.1 [temp.archetype.assemble] so that a type archetype is only considered to be completely-defined if it has that requirement. This would imply that f(T*) above is ill-formed at ++p because T would then be an incomplete effective object type; the user could fix this by inserting requires std::CompleteType<T> after the template-parameter-list, and then the call f(p) would be ill-formed because std::CompleteType<A> would not be satisfied.
If there is no requirement for a destructor for a type, according to _N2914_.14.11.2.1 [temp.archetype.assemble] paragraph 5 its archetype will have a deleted destructor. As a result, several examples in the current wording are ill-formed:
_N2914_.14.10.2 [concept.map] paragraph 11: the add function.
_N2914_.14.10.3.1 [concept.member.lookup] paragraph 3: the f function. (Also missing the copy constructor requirement.)
_N2914_.14.10.3.1 [concept.member.lookup] paragraph 5: the h function. (Also missing the copy constructor requirement.)
_N2914_.14.11.1 [temp.req] paragraph 3: the f function.
_N2914_.14.11.1.1 [temp.req.sat] paragraph 6: the g function. (Also missing the copy constructor requirement.)
_N2914_.14.11.2 [temp.archetype] paragraph 15: the distance function.
_N2914_.14.11.2.1 [temp.archetype.assemble] paragraph 2: the foo function.
_N2914_.14.11.2.1 [temp.archetype.assemble] paragraph 3: the f function.
_N2914_.14.11.4 [temp.constrained.inst] paragraph 4: needed for difference_type.
One possibility would be to add the destructor requirement directly in these examples. Another might be to use std::CopyConstructible instead of a local concept. Yet another would be to consider an implicit requirement for a destructor for std::Returnable and std::VariableType.
The example in _N2914_.14.11.4 [temp.constrained.inst] paragraph 4 is ill-formed. The call to advance(i, 1) in f attempts to pass 1 to a parameter declared as Iter::difference_type, but there is nothing that says that int is compatible with Iter::difference_type.
According to _N2960_.3.3.9 [basic.scope.req] paragraph 2,
In a constrained context (_N2914_.14.11 [temp.constrained]), the names of all associated functions inside the concepts named by the concept requirements in the template's requirements are declared in the same scope as the constrained template's template parameters.
This does not appear to cover the case when the requirement appears in a concept definition:
auto concept B<class T> {
  void f( T );
}

auto concept C<class T> {
  typename U = int;
  requires B<U>;        // Is void f(U) placed in the scope of C?
  void g(U x) {
    f(x);               // Ok, finds the 'f(U)' implicitly declared as a
                        // result of the associated requirement B<U>.
  }
}

void f(int);

void g() {
  C<char>::f(42);       // Ok?
}
This program should be well-formed, but the current wording does not make that clear.
Another question that must be addressed is what happens if C also contains an explicit declaration of f(U), either before or after the requirement B<U>, and whether such a declaration would need a satisfier within a concept map for C.
(See also issue 866.)
_N2960_.6.9 [stmt.late] paragraph 2 consists of the following example:
concept Semigroup<typename T> {
  T::T(const T&);
  T operator+(T, T);
}

concept_map Semigroup<int> {
  int operator+(int x, int y) { return x * y; }
}

template<Semigroup T> T add(T x, T y) {
  T r = x + y;      // uses Semigroup<T>::operator+
  late_check {
    r = x + y;      // uses operator+ found at instantiation time
                    // (not considering Semigroup<T>::operator+)
  }
  return r;
}
The second comment is correct but incomplete, because the assignment operator is also found at instantiation time. The assignment would be ill-formed outside the late_check block, because the Semigroup concept has no copy assignment operator. The comment should be extended accordingly.
According to 6.5.5 [basic.lookup.qual] paragraph 7,
In a constrained context (_N2914_.14.11 [temp.constrained]), a name prefixed by a nested-name-specifier that nominates a template type parameter T is looked up as follows: for each template requirement C<args> whose template argument list references T, the name is looked up as if the nested-name-specifier referenced C<args> instead of T (_N2960_.3.4.3.3 [concept.qual]), except that only the names of associated types are visible during this lookup. If an associated type of at least one requirement is found, then each name found shall refer to the same type. Otherwise, if the reference to the name occurs within a constrained context, the name is looked up within the scope of the archetype associated with T (and no special restriction on name visibility is in effect for this lookup).
In an example like,
concept A<typename T> { typename assoc_type; }
concept B<typename T> { typename assoc_type; }

template<typename T>
  requires A<T>
B<T::assoc_type>::assoc_type f();
it is not clear whether the argument T::assoc_type of B “references” T or not.
James Widman: In our mental model (and in our intentions while drafting), we still have a (non-archetype) dependent type for the T in your example, and, even after the SameType requirement is seen, we also have a distinct dependent type to represent A<T>::assoc_type (which itself is distinct from the type of the entity named assoc_type that lives in the scope of the concept A). And those two dependent types (A<T>::assoc_type and T) will both alias the same type archetype when that archetype is established (see the paragraph on establishment in _N2914_.14.11.2 [temp.archetype]).
I think 6.5.5 [basic.lookup.qual] paragraph 6 will read more easily if we change the “references a template parameter” verbiage to a generalized “dependent type” verbiage. (We shied away from that in the past because we wanted to say that there's nothing “dependent” within a constrained context. That's because we wanted to say that all name references are bound to something, overload resolution is done, etc. So certainly there are no instances of deferred name lookup or deferred overload resolution within a constrained context. But we still need to be able to say when a type, template, value or concept instance depends on a template parameter.) I propose we change this wording to read,
In a constrained context (_N2914_.14.11 [temp.constrained]), the identifier of an unqualified-id prefixed by a nested-name-specifier that nominates a dependent type T is looked up as follows: for each template requirement C<args> such that either T or an equivalent type (_N2914_.14.11.1 [temp.req]) is a template argument to C, the identifier of the unqualified-id is looked up as if the nested-name-specifier nominated C<args> instead of T (_N2960_.3.4.3.3 [concept.qual]), except that only the names of associated types and class templates (_N2914_.14.10.1.2 [concept.assoc]) are visible during this lookup. If an associated type or class template of at least one requirement is found, then the unqualified-id shall refer either to the same type or to an equivalent type when its identifier is looked up in each of the concepts of the other requirements where T is a template argument. [Note: no part of the procedure described in the preceding part of this paragraph results in the establishment of an archetype (_N2914_.14.11.2 [temp.archetype]). However, in the event that the unqualified-id is a template-id, one of its template arguments could contain some construct that would force archetype establishment. —end note] Otherwise, the name is looked up within the scope of the archetype aliased by T (and no special restriction on name visibility is in effect for this lookup). [Note: this establishes the archetype of T (if it was not established already). —end note]
(It looks like we have a wording nit to fix in the archetype establishment paragraph: it talks about a type archetype coming into existence “when it is used [in some way].” It seems odd to say that something is used in a particular way before it exists. We should instead say something like “when a necessarily-dependent type that would alias the archetype is used [in some way].”)
(It might also be nice to have a cleanup in the paragraph that introduces the notion of std::SameType and “equivalent types” (_N2914_.14.11.1 [temp.req] paragraph 3) so that the congruence relation is part of the normative text rather than a note.)
6.6 [basic.link] does not specify whether concept names have linkage or not.
Given an example like:
auto concept Conv<typename T, typename U> {
  U::U(T&&);
  U::U(const U&);
  U::~U();
};

template<typename U, typename T>
  requires Conv<T*, U*>
U* f(T* p) {
  return static_cast<T*&&>(p);
}
There is currently no normative wording that makes a T* convertible to a U* in the return statement.
One possible approach would be to take the concept map archetype as specifying an additional case for the pointer conversions in 7.3.12 [conv.ptr].
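As an aside, for illustration only: under present-day C++20 constraints the conversion is checked against the actual template arguments at the point of use, so no additional normative wording is needed there (illustrative names):
#include <concepts>

struct Base {};
struct Derived : Base {};

template<typename T, typename U>
  requires std::convertible_to<T*, U*>
U* to_base(T* p) { return p; }    // the conversion is checked against the actual
                                  // template arguments when to_base is instantiated

Derived d;
Base* b = to_base<Derived, Base>(&d);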
If we cannot bind references to, or take the address of, functions in concept_maps, does that mean we cannot use generic bind in constrained templates? Launch threads with expressions found via concept map lookup? Hit problems creating std::function objects? Does the problem occur only if we use qualified lookup to explicitly name a concept map? Does it kick in only if we rely on the implicit function implementation provided by a concept_map, so that some types will work and others won't for the same algorithm?!
Additional note, June, 2009:
Here is an example illustrating the question:
auto concept Fooable<typename T> {
  void foo(T);
}

struct test_type {
  void foo() { cout << "foo test_type\n"; }
};

concept_map Fooable<test_type> {
  void foo(test_type& t) { t.foo(); }
}

void foo(int x) { cout << "foo int\n"; }

template<typename T> requires Fooable<T>
function<void(T)> callback(T t) {
  void (*fn)(T) = foo;
  return fn;
}

int main() {
  auto fn1 = callback(test_type{});
  auto fn2 = callback(0);
  fn1(test_type{});
  fn2(0);
  return 0;
}
The expansion of the range-based for statement is given in 8.6.5 [stmt.ranged] paragraph 1 as:
{
  auto && __range = ( expression );
  for ( auto __begin = std::Range<_RangeT>::begin(__range),
             __end = std::Range<_RangeT>::end(__range);
        __begin != __end;
        ++__begin ) {
    for-range-declaration = *__begin;
    statement
  }
}
In a non-templated context, the concept map to std::Range has been dropped, so the operators and initialization will be whatever they would normally be; if the concept map replaced those with some customized version (e.g., if the iterator's ++ were supposed to skip odd-numbered elements), that customized meaning would be lost.
What we really want are the operators associated with the concept map to std::Iterator that was used to satisfy the associated requirement (std::Iterator<iterator>) within std::Range<_RangeT> (in whatever concept map was used to satisfy std::Range<_RangeT>). That is, if the grammar permitted it, we want something like
std::Range<_RangeT>::concept_map ::std::Iterator<std::Range<_RangeT>::iterator>::operator++(__begin)
Another alternative would be, if issue 856 is resolved by injecting the declaration of associated functions into concept definitions, something like
std::Range<_RangeT>::operator++(__begin)
Paper N2762 changed 8.8 [stmt.dcl] paragraph 3 from
...unless the variable has trivial type (6.8 [basic.types])...
to
...unless the variable has scalar type, class type with a trivial default constructor and a trivial destructor, a cv-qualified version of one of these types, or an array of one of the preceding types...
However, this change overrode the colliding change from N2773 that would have changed it to read
...unless the variable has effective trivial type...
The revised wording needs to be changed to allow for archetypes with the appropriate requirements.
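For reference, a minimal plain-C++ sketch of the rule in 8.8 [stmt.dcl] being amended (illustrative names):
struct Guard { Guard(); ~Guard(); };   // non-trivial default constructor and destructor

void h(bool b) {
  if (b)
    goto done;   // ill-formed: the jump enters the scope of g, bypassing its
                 // initialization; with "int g;" instead, the jump would be allowed
  Guard g;
done:
  ;
}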
If we write
concept C<class T> {}

template<C T> struct B {
  B f();
  virtual void g() = 0;
};
... it seems reasonable to expect a diagnostic for B<T>::f(), not because the template fails to require std::Returnable<B<T>> (which, I think, should not draw an error), but because g() is a pure virtual function, making B<T> an abstract class that cannot be returned by value.
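For reference, the underlying non-template rule already makes a function with an abstract return type ill-formed (illustrative sketch):
struct Abstract { virtual void g() = 0; };
Abstract f();    // error: an abstract class cannot be used as a function return type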
Now how about this:
template<C T> struct G { B<T> f() { return B<T>(); } };
Here, I'd like to see an error not because we lack the requirement std::Returnable<B<T>>, but because, when we instantiate B<T'> (as the current wording indicates we must within the definition of G<T>::f()), it turns out to be an abstract class.
Now, it could be that when we instantiate G, we get a different partial specialization of B, and that partial specialization could have a pure virtual member. So you might see an instantiation-time error. But partial specializations present dangers like this anyway.
I suggest we make the rule about Returnable<T> apply only in the case where T is not an instantiated archetype. The rationale is that with an instantiated archetype, it's possible to see at template definition time whether the type is abstract, whereas with a non-instantiated archetype, the only known attributes come from requirements.
I suspect we need similar changes for the declarator section. E.g., for a class template A, we shouldn't need to explicitly require VariableType<A<T>> if we want to declare a variable of type A<T>. Instead, we just instantiate A<T'> (as would be naturally required at the point of definition of a variable of type A<T'>), and issue errors when appropriate like we do with ordinary classes today.
According to 11.4 [class.mem] paragraph 19,
A non-template member-declaration that has a member-requirement (_N2914_.14.11.1 [temp.req]) is a constrained member and shall occur only in a class template (13.7.2 [temp.class]) or nested class thereof. The member-declaration for a constrained member shall declare a member function. A constrained member is treated as a constrained template (_N2914_.14.11 [temp.constrained]) whose template requirements include the requirements specified in its member-requirement clause and the requirements of each enclosing constrained template.
Furthermore, 13.7.3 [temp.mem] paragraph 9 says,
A member template of a constrained class template is itself a constrained template (_N2914_.14.11 [temp.constrained])...
and illustrates this statement with the following example:
concept C<typename T> { void f(const T&); }
concept D<typename T> { void g(const T&); }

template<C T> class A {
  requires D<T> void h(const T& x) {
    f(x);    // OK: C<T>::f
    g(x);    // OK: D<T>::g
  }
};
If these passages are taken at face value and a constrained member function is, in fact, “treated as a... template,” there are negative consequences. For example, according to 11.4.5.3 [class.copy.ctor] paragraph 2, a member function template is never considered to be a copy constructor, so a constrained constructor that has the form of a copy constructor would not suppress the implicit declaration and definition of the copy constructor. Also, according to 13.7.3 [temp.mem] paragraph 3, a member function template cannot be virtual, so it would not be possible to specify a member-requirement clause for a virtual function.
Presumably these consequences are unintended, so the wording that suggests otherwise should be revised to make that clear.
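For reference, minimal plain-C++ sketches of the two cited rules (illustrative names):
struct S {
  template<typename T> S(const T&);   // a constructor template is never a copy
                                      // constructor, so S(const S&) is still
                                      // implicitly declared
};

struct V {
  template<typename T> virtual void f(T);   // error: a member function template
                                            // shall not be virtual
};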
11.4.8.3 [class.conv.fct] paragraph 1 says,
A conversion function is never used to convert a (possibly cv-qualified) object to the (possibly cv-qualified) same object type (or a reference to it), to a (possibly cv-qualified) base class of that type (or a reference to it), or to (possibly cv-qualified) void.
Does this mean that the following example is ill-formed?
auto concept Convertible<typename T, typename U> {
  operator U(const T&);
}

template <typename U, typename T> requires Convertible<T, U>
U convert(const T& t) { return t; }

int main() {
  convert<int>(42);
}
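For reference, a plain-C++ sketch of the quoted rule (illustrative names):
struct X {
  operator X() const;    // never used: a conversion function is not considered for
                         // converting an X to the same type X
};

void h(X a) {
  X b = a;               // the copy constructor is used, not the conversion function
}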
12.5 [over.built] paragraph 11 says,
For every quintuple (C1, C2, T, CV1, CV2), where C2 is a class type, C1 is the same type as C2 or is a derived class of C2, T is an effective object type or a function type, and CV1 and CV2 are cv-qualifier-seqs, there exist candidate operator functions of the form
CV12 T& operator->*(CV1 C1*, CV2 T C2::*);
where CV12 is the union of CV1 and CV2.
C1 and C2 should be effective class types (cf. 7.6.4 [expr.mptr.oper] paragraph 3).
Also, should the relationship between those two classes be expressed as std::SameType or std::DerivedFrom requirements?
Is it possible to export a concept map template? The current wording suggests it is possible, but it is not entirely clear what it would mean.
Notes from the March, 2009 meeting:
Export is only useful for non-inline function templates and static data members of class templates, so it does not make sense to export a concept map template.
The grammar for constrained-template-parameter is given in 13.2 [temp.param] paragraph 1. The identifier naming the parameter is optional in the first two productions but not in the latter two productions. Is there a reason for this discrepancy?
There is currently no way to distinguish between templates that differ only by their requirements when naming a specialization. For example:
auto concept A<class T> {}
auto concept B<class T> {}

template<class T> requires A<T> void f(T);    // #1
template<class T> requires B<T> void f(T);    // #2

template <> void f(int);                      // Which one?
(See also issue 868.)
There is no way to specify a concept map in the name of a specialization. It would be useful to be able to do something like
void g(int n) { f<int : N::concept_map A<int>>(n); }
(See also issue 867.)
The requirements for matching of template template parameters and template template arguments given in 13.4.4 [temp.arg.template] do not mention constraints, leaving questions about whether examples like the following are well-formed:
auto concept C<class T> {};

template <template <C T> class U, C V> struct A{};
template <class T> struct X {};
A<X,int> ax;    // Well-formed?

template <template <class T> class U, C V> struct B{};
template <C T> struct Y {};
B<Y,int> by;    // Well-formed?

template <template <class T> class U> struct D{};
template <C T> struct Z {};
D<Z> dz;        // Well-formed?
(See also issue 848.)
If the requirements of a constrained special member function are not satisfied, the result is that the member function is not declared (13.7.2 [temp.class] paragraph 5). This allows the special member function to be implicitly declared and defined, which will likely result in an ill-formed program or one with the wrong semantics.
Although the current wording does specify the outcome, it is not immediately apparent what the result of an example like the following should be:
template<std::ObjectType T> struct S {
  requires std::CopyConstructible<T> S(const S&) = default;
};
The outcome (that S will have an implicitly-declared copy constructor that is defined as deleted in specializations in which T is not copy-constructible) would be clearer with the addition of two notes. First, it would be helpful if 13.7.2 [temp.class] paragraph 5, which currently reads,
A constrained member (11.4 [class.mem]) in a class template is declared only in class template specializations in which its template requirements (_N2914_.14.11.1 [temp.req]) are satisfied (_N2914_.14.11.1.1 [temp.req.sat])...
had a note or footnote to the effect,
When a constrained member of a template is a special member function, and when, in an instantiation, the member is not declared because its requirements are not satisfied, the special member is considered not to have been “explicitly declared” (i.e., the member is not user-declared); therefore a declaration may still be implicitly generated as specified in 11.4.4 [special].
The fact that the implicitly-declared copy constructor in this case is defined as deleted would be clearer if somewhere in the second list in 11.4.5.3 [class.copy.ctor] paragraph 5, which currently reads
...An implicitly-declared copy constructor for a class X is defined as deleted if X has:
a variant member with a non-trivial copy constructor and X is a union-like class,
a non-static data member of class type M (or array thereof) that cannot be copied because overload resolution (12.2 [over.match]), as applied to M's copy constructor, results in an ambiguity or a function that is deleted or inaccessible from the implicitly-declared copy constructor, or
a direct or virtual base class B that cannot be copied because overload resolution (12.2 [over.match]), as applied to B's copy constructor, results in an ambiguity or a function that is deleted or inaccessible from the implicitly-declared copy constructor.
there were a cross-reference to _N2914_.14.11.2.1 [temp.archetype.assemble], whose third paragraph reads,
If no requirement specifies a copy constructor for a type T, a copy constructor is implicitly declared (11.4.5.3 [class.copy.ctor]) in the archetype of T with the following signature:
T(const T&) = delete;
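For reference, a minimal sketch of the ordinary (non-archetype) form of the deleted-copy-constructor rule quoted above (illustrative names):
struct NoCopy {
  NoCopy();
  NoCopy(const NoCopy&) = delete;
};

struct Holder {
  NoCopy m;    // Holder's implicitly-declared copy constructor is defined as deleted,
               // because overload resolution for copying m selects a deleted function
};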
The relationship of requirements with template aliases is not clear in the current wording. For example, something like
auto concept C<class T> {};
template <class T> struct A{};
template <C T> using B = A<T>;
is presumably allowed by the current wording of 13.7.8 [temp.alias] but, unless a good use case is presented, should probably be prohibited.
On the other hand, _N2914_.14.11 [temp.constrained] paragraph 5,
Within a constrained context, a program shall not require a template specialization of an unconstrained template for which the template arguments of the specialization depend on a template parameter.
might be considered to forbid an example like
template <C T> struct X {};
template <class T> using Y = X<T>;
template <std::VariableType T> void f(Y<T>);    // Error?
although it should probably be allowed. (Note, however, that 13.7.8 [temp.alias] paragraph 2,
When a template-id refers to the specialization of a template alias, it is equivalent to the associated type obtained by substitution of its template-arguments for the template-parameters in the type-id of the template alias.
could be viewed as allowing this example, depending on how the word “equivalent” is understood.)
The text should be amended to clarify the resolution of these questions. (See also issue 848.)
The current grammar does not allow indicating that a lambda is noreturn: an attribute appearing in a lambda-expression appertains to the type of the corresponding function call operator or operator template, per 7.5.5 [expr.prim.lambda] paragraph 5, while the noreturn attribute appertains to functions, not to types.
Additional note (February, 2022):
This issue was addressed by the adoption of paper P2173R1 at the February, 2022 plenary.
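For illustration, a sketch of what the adopted syntax permits, assuming a C++23 implementation of P2173R1 (illustrative names):
#include <stdexcept>

// The attribute following the lambda-introducer appertains to the closure type's
// function call operator, so the lambda can now be marked [[noreturn]].
auto fail = [] [[noreturn]] (const char* msg) {
  throw std::logic_error(msg);    // a [[noreturn]] function may exit by throwing,
                                  // but must not return normally
};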
Part of issue 2486 raised the question of whether static_cast should be permitted to cast a noexcept(false) function type to a noexcept function type. Presumably that would also involve changing 7.6.1.3 [expr.call] paragraph 6 to allow a call through the converted value, with undefined behavior resulting only if the called function actually exits via an exception.
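For illustration, one reading of the question in terms of function pointers (a sketch with illustrative names, not normative wording):
void g();                           // a potentially-throwing function

using FP_NE = void (*)() noexcept;

FP_NE p = static_cast<FP_NE>(&g);   // ill-formed today: static_cast cannot reverse
                                    // the function pointer conversion that drops
                                    // noexcept; the question is whether this should
                                    // be allowed, with undefined behavior only if a
                                    // call through p actually exits via an exception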
CWG felt these questions should be addressed by EWG, so they were spun off into a separate issue.
Although there are implementations that use thunks for pointers to virtual member functions, it appears that such a technique is not permitted by the Standard. Concerns particularly include the requirements for complete types for parameter and return types at the point at which the member function pointer is formed.
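For illustration, a minimal sketch of the kind of case at issue (illustrative names):
struct X;                      // X is incomplete here

struct B {
  virtual X f(X*);             // OK: a declaration may use the incomplete type X
};

X (B::*pmf)(X*) = &B::f;       // the pointer to virtual member function is formed
                               // while X is still incomplete; a thunk-based
                               // implementation might want complete types here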
Consider:
template<void *P> void f() {
  if constexpr (P) {}    // #1
}
This is ill-formed at #1, because an expression of type void* cannot be converted to bool as a contextually converted constant expression.
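For illustration only (not a proposed resolution), the example becomes well-formed if the condition is itself a constant expression of type bool:
template<void *P> void f() {
  if constexpr (P != nullptr) {}   // OK: the equality comparison yields a constant
                                   // expression of type bool
}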
[This suggestion was adopted as paper P2156R1 at the June, 2021 plenary.]
The standard attributes noreturn, carries_dependency, and deprecated all specify that they cannot appear more than once in an attribute-list, but there is no such prohibition if they appear in separate attribute-specifiers within a single attribute-specifier-seq. Since intuitively these cases are equivalent, they should be treated the same, accepting duplicates in both or neither.
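For illustration, a sketch of the two forms as treated before P2156R1 (illustrative names):
[[noreturn]] [[noreturn]] void f();   // not prohibited: the duplicates appear in
                                      // separate attribute-specifiers
[[noreturn, noreturn]] void g();      // error: noreturn shall appear at most once
                                      // in an attribute-list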
Rationale (June, 2014):
EWG should determine the desired outcome for this question.
According to 11.8.4 [class.friend] paragraph 2,
Declaring a class to be a friend implies that the names of private and protected members from the class granting friendship can be accessed in the base-specifiers and member declarations of the befriended class.
A friend declaration is itself a member-declaration, but it is not clear how far the granted access extends to friend declarations and definitions within the befriended class. For example:
class c {
  class n {};
  friend struct s;
};

struct s {
  friend class c::n;              // #1
  friend c::n g();                // #2
  friend void f() { c::n(); }     // #3
};
In particular, if a friend function is defined inside the class definition, as in #3, does its definition have access to the private and protected members of the befriending class? Implementations vary on this point.
Additional note (June, 2021):
The initial opinion of CWG (at the September, 2013 meeting) was that “member declarations” was intended to be the English equivalent of the syntactic nonterminal member-declaration, including a friend declaration/definition inside the member-specification of a class, making #3 well-formed. However, recent discussion has expressed concern over the different treatment of in-class and out-of-class definitions of friend functions and observed that there is still divergence among implementations.
Rationale (November, 2021):
There are two lines of analysis that lead to opposite conclusions. The first is that a friend defined within the member-specification is written by the class author and is effectively part of the class, not subject to hijacking by other declarations, and thus should be afforded the same access as all other declarations that are part of the class. The second is that giving different access to a friend function based simply on whether it was defined inside or outside of its befriending class is confusing.
CWG considered this to be a design-level question, not simply to be determined by the usual relationship between English and grammar terms, and thus is asking EWG for its opinion.
The status of an example like the following is unclear:
struct S {
template <class T> friend void f(T) { }
};
template void f(int); // Well-formed?
A friend is not found by ordinary name lookup until it is explicitly declared in the containing namespace, but declaration matching does not use ordinary name lookup. There is implementation divergence on the handling of this example.
Notes from the March, 2018 meeting:
CWG did not come to consensus on the desired outcome and feels that the question should be addressed by EWG.