
> We do not have namespaces in C, which means any time we add functionality we basically have to square off with users.

Has anyone seriously proposed adding namespaces to C? This feels like an obviously good addition to me. What’s the argument against it?



Namespaces are often called for, especially by programmers used to C++ but doing C. They seem like such a simple feature to implement. I'm still not sure I like them:

- What is the benefit of using A::B::C versus A_B_C, really, in terms of avoiding name clashes? I can point out one immediate benefit of doing the latter - code is easier to read because line noise is reduced (perhaps subjective but I think it will be hard arguing the other way around).

- Is it a good thing that half the code will now refer to A::B::C using only B::C or even only C? Without assistance of a program with good semantic insight (a solid IDE, etc) this only makes identifier search harder.

- Namespaces "enforce" discipline in one way - you can be sure that the symbols at the binary level will be properly prefixed. But with only a little discipline programmers can do the prefixing themselves, and in return they can move code between files (/namespaces) more freely, which is good for refactoring.

And as a sibling commenter pointed out, one factor in why namespaces get called for could be a lack of awareness that we can hide the "guts" of any function (or variable) using "static" linkage. Less discipline (prefixes...) is needed for these functions because they are only visible in the current translation unit.


> What is the benefit of using A::B::C versus A_B_C, really, in terms of avoiding name clashes?

Many C programs do not use hierarchical names like `A_B_C` as they ideally should. They instead pick a short identifier that is (often incorrectly) believed to be distinct enough.

> Is it a good thing that half the code will now refer to A::B::C using only B::C or even only C? Without assistance of a program with good semantic insight (a solid IDE, etc) this only makes identifier search harder.

If you meant that identifier search should be possible with just grep: yes, namespaces make it harder, but they're not the only cause, and this doesn't explain why many other languages with less overall IDE support than C/C++ have namespaces. And I believe it is possible to design a namespace system that needs only ctags-level automation for proper identifier search, though I don't know how you feel about ctags.


> Many C programs do not use hierarchical names like `A_B_C` as they ideally should. They instead pick a short identifier that is (often incorrectly) believed to be distinct enough.

What is your list of name clashes that you experienced? How many real headaches did they give you? Or is it all a non-problem? Not a rhetorical question - I'm mostly working on smaller projects < 100KLOC.

> I don't know how you feel about ctags.

I use ctags from time to time when I have to, but I still don't like it when multiple namespaces contain the same set of names, which confuses navigation operations, even in vim with ctags I think. Maybe even in IDEs like Visual Studio, though I'd have to check what works and what doesn't.


> What is your list of name clashes that you experienced? How many real headaches did they give you?

Here are a dozen examples from Xlib, where Windows happened to choose the same short common names for many things: https://gitlab.freedesktop.org/xorg/proto/xorgproto/-/blob/m...

... and IIRC this list is no longer sufficient; you need to add to it in order to compile with the current Windows SDK. (Upon closer inspection, it was updated two weeks ago, so maybe it's fine... until the next Windows SDK update.)

I'll admit that this happens less often on Linux, where the system headers are smaller (and everybody uses X11, so there's historic reason to avoid all the short names that were gobbled up by Xlib in the '80s), but I've still run into occasional clashes between different library headers, or between legacy code and updated headers. E.g. bool/Bool/BOOL are common collisions among pre-C99 libraries (including libraries that require C99 but don't remove the old names for backwards-compatibility reasons), as well as min/MIN/max/MAX, which still aren't in standard C as far as I can tell.

The headaches it gives are real, but not large in the grand scheme of things. The lack of defer (or an otherwise standardized and cross-platform __attribute__((cleanup))) is a bigger headache, for example.


I've programmed C for 35 years, and namespace clashes have happened for me exactly once. There are two JSON libraries that use json_* as a prefix and have conflicting symbols. This actually caused a quite difficult-to-track-down bug: https://bugzilla.redhat.com/show_bug.cgi?id=2001062

However this is not a reason to add namespaces. (In fact the bug was fixed using symbol versioning, an already existing feature of ELF.)


> What is your list of name clashes that you experienced? How many real headaches did they give you? Or is it all a non-problem? Not a rhetorical question - I'm mostly working on smaller projects < 100KLOC.

I too have refrained from using C for software that large, so I don't have many examples either, but in one case I was using TweetNaCl, where you have to supply `extern void randombytes(unsigned char*, unsigned long long)` for the CSPRNG, and I had to rename it for some reason I can no longer recall.


> What is your list of name clashes that you experienced?

My experience is that sharing libraries is so wildly difficult in both C and C++ that code is not shared and wheels are reinvented. This has more to do with build systems than namespaces. But namespaces are a factor.


> many other languages with less overall IDE support than C/C++ have namespaces.

Because they are not C-style languages. In Python or Java, for example, you need a separate file to create a package. In C++ you can open a namespace anywhere you want. This makes C++ namespaces harder to maintain even with automated tools.


Namespaces can be given aliases to avoid collisions; good luck doing that with A_B_C.


Good point, but then again I'm not that sure it's a net benefit because I like everything in a codebase to refer to a given object using the exact same name.

I suppose aliasing is useful for widely used libraries, for example if a big software project wants to include two different versions of the same library. Or if Team A wants to rename their module but make it easy for the other teams that use it to follow suit.

All that can provide some ease of use in the short term but produces more mess to clean up in the long term. IMHO.

Real name clashes between two different libraries are quite unlikely, and namespaces would only solve that problem at the source-code level, not at the binary/symbol level.


Namespaces can enforce good project structure. If you're using A::B::C and A::B::D, you can be 100% sure that both C and D live under B, and your editor can work with that as well.


> you can be 100% sure that both C and D live under B

And what benefit exactly does that bring you? I've seen a number of projects preoccupied with "proper nesting"; it is 99% bureaucracy, and all those projects are still a mess.

The benefits of "living under this or that" are technically zero, and with respect to human factors are minimal given that 1) you can get most of the organizational benefits of namespaces by doing A_B_C, and 2) you can also add new members to proper namespaces from external files in most languages, including C++, so there really isn't a difference w.r.t "knowing for sure".


The benefit is a nice mental model of where each function lives (if you don't overdo it). This type of packaging also lends itself to module systems, a thing I desperately wish for in C.


This type of "packaging" is completely orthogonal to module systems. It's merely a syntactic discipline that adds one more complication to deal with even where it has zero benefit.

You could say that C/C++ already has "modules" if you look at object files and header files. Of course, they are a limited kind of module: they are a bit low-level, and C has the preprocessor problem, making it slow to "import" them. But none of that has to do with namespaces.


Well, they enforce having a lot of project structure. Java and C# use their namespaces to ensure that everything useful is under five layers of pointless names like System.Collections.com.net.org.ArrayList.

I think not having namespaces may be an effective way to prevent enterprise programming from happening.


Yea, I've got to be honest in that I don't understand Java or C# hierarchies.

Node.js and Rust do module systems just fine.


Yes, people have floated the idea. I argued against it; I was asked at the WG14 meeting to explain my position, and the following was my reply:

I was asked during the meeting why I think that namespaces are a bad idea, so here are my thoughts. I must warn you that the following may sound like an anti-C++ rant, because, well... it is.

Namespaces have several bad aspects. Most of them come down to making the code a lot less readable.

First, if I see a function call in source code, I can't assume which function is called. There may be a using-directive somewhere above, or in a header file somewhere, that changes the meaning of the code. It matters a lot, for instance, when you copy code from one file to another and all of a sudden the code means something completely different. Namespaces add state that changes the meaning of the code around it. When your code isn't doing what it should and you are trying to figure out why, you need to be able to trust that the code you are reading on screen is actually what you think it is. I dislike it for the same reason I dislike overloading: it makes it less clear what the things you are reading really are.

(It may be argued that you already can't trust your eyes in C, because you can do lots of devious things with the preprocessor, and to that I say: yes, it's bad enough, so let's not add more of it.)

Namespaces split the identifier in half, but it's still a unique identifier.

    my_module::create();

is and has to be as unique as:

    my_module_create();

So what are we accomplishing? Do we have the same level of collisions? No, in fact we have more collisions, because the parts can collide individually! If there are multiple my_module they will collide, and if there are multiple create in different namespaces they will collide!

    my_module::create();  // one namespace
    my_module::destroy(); // another namespace

Collision! how about:

    using namespace my_module;       // namespace with a create
    using namespace my_other_module; // namespace with a create

    create();

Collision! If we instead write plain old C:

    my_module_create();
    my_module_destroy();

No collision.

    my_module_create();
    my_other_module_create();

No collision.

Beyond creating new opportunities for collisions, we create confusion by making it possible to hide half the identifier somewhere else. Yes, it can save some typing, but programming is never hard because of typing; programming is hard because it's hard to understand what is going on. Namespaces solve the easy stuff by making the hard stuff harder.

It's the age-old trap of C++: adding something clever out of convenience that turns out to be unclear and something the programmer has to manage.

Further, namespaces encourage people to use short common names even more, because they think it saves them typing, and they think if it goes wrong they can always manage it with namespaces, and then they end up having to manage something they shouldn't have had to manage in the first place.

I have never had a namespace collision in 20+ years of C programming. Why? Because I use long descriptive names that always start with where the functionality resides. It's readable, straightforward, and it works. Namespaces are a complicated system for managing a problem that should never happen in the first place unless the user is very careless. We should not encourage carelessness.

I think C should take some blame for this being a problem, because the standard library has far shorter names than is advisable. It has made people think that names like "jn" or "erf" are good examples of unique, clear and descriptive naming. The added wrinkle of "significant characters" has made people think that C mandates short names, something that no major implementation requires. There also seems to be a persistent belief that display technology has not yet evolved to the level where we can display more than 80 characters per line, and that we therefore need to use short cryptic names for everything. This too is an argument from a different age.

Namespace collision in C almost never happens between third-party libraries; it is almost exclusively a problem of the standard library, because it is so poorly named. If we wanted to fix this by hiding the standard library behind a namespace, we might as well just add a c_standard_lib_ prefix to all functions and keep the garbage that is namespaces out of C. Not that we would do either, since both would break backwards compatibility. So why would we add namespaces if they wouldn't be used by the standard library, the very library we have name-collision problems with? In fact, if we added an optional namespace for the standard library, all we would accomplish is to pollute the namespace with one more identifier.


    namespace my_better_name = my_module;

    //....

    my_better_name::create(); // collision gone
Problem solved; better improve your C++ knowledge.


This addresses one (very uninteresting) "problem" shown by the person you are replying to. A problem which can be solved regardless of the existence of namespace aliases ("namespace a = b;") by simply not using the namespace. There are still all the other problems, which are more concrete but which namespaces can't solve.

The C standard library should have (maybe as early as C11 or C99) picked stdc_ as a "namespace", reserved it, made it a warning to put things in it, and used it for everything going forwards.

What C needs is not namespaces, it's a module system, so that symbol visibility can be more tightly controlled.


Good luck with that.


> Yes, it can save some typing, but programming is never hard because of typing; programming is hard because it's hard to understand what is going on.

I agree that optimizing the number of keystrokes is a bad goal, but in my opinion that isn't the selling point of namespaces. The benefit is better readability: code written with namespaces lets your eye focus on the semantic name of the method, not the extra character noise that was added as a form of manual name-mangling.

This is highly opinionated, of course, because there is a tradeoff with the ability to uniquely identify a method at a glance, as you mentioned. Which camp you fall into is likely to depend on what sort of software you write and whether you use an IDE or a text editor.


This is a misunderstanding of C++ principles, not an anti-C++ rant.

In C++, it is intentionally not straightforward to know what code gets called when a statement is executed - even without namespaces. We have:

* Function overloads

* Non-trivial (and not-built-in) implicit conversions

* Non-trivial copy and move constructors

* Template and concept resolution, which can be tricky

* Operator overloads, so that foo[n] or bar() may do something very different than what you expect.

And if you add in macros to the mix, you can really go crazy. For example, in this SO answer: https://stackoverflow.com/a/70328397/1593077 I explain how to implement an infix operator which lets us write:

    int x = whatever();
    if ( x is_one_of {1, 3, 42, 7, 69, 550123} ) {
        do_stuff(x);
    }
C is simply not that kind of language (well, macros notwithstanding I suppose).


This is nice if your goal is to come up with convoluted ways to say the same thing in fewer characters (ignoring the code that you had to add to enable the shorter version in the first place).

With your example, both a switch and simple if-else chains would be perfectly readable and also easy to write.


I agree with everything you say but want to add a caveat to the "long descriptive names". I find that there must be a balance where names are clear and unique enough while not tiring the eyes too much by making them read long boring repetitive prefixes all the time. There is a fine art to creating words and also sentences (statements).

By now, if a "module prefix" is bigger than say 4-5 characters, I get unhappy and know I need to improve it, to find a short mnemonic. Same goes for local variable names, which I try to keep at 1-5 characters; rarely do they get 10+ characters long. There is always this tension between names being "self-documenting" and "just long enough to remind of the purpose that was explained in a not-too-distant context".

Variables that are frequently used should be shorter. Variables that have a clear intuitive use (like "x" or "i", which most often have very clear meaning with only a little context added) should be shorter. Module names should _always_ be very short, because (I assume) there are only a few modules, and it's better to remember the purposes of a few modules together with their abbreviated names than to have to read a long repetitive module name on every other line. Function name suffixes (without the module prefix) should often be long, because there are many different functions and not all of them can be cached with their meanings in the programmer's brain; so I typically allow function names to be a little self-documenting.

I agree that the names in POSIX and C are too short and cryptic, but I think they made more sense in the context of the Unixes of the 1970s, when projects weren't as big as today's.

I still try to stay within 80 or 100 columns, because line length is a readability/eye-strain concern as well, but given that I'm in the 8-spaces camp I don't freak out anymore if there is the occasional 140-character line and I'm too lazy to trim it down. Judicious insertion of line breaks is most often useless busywork. Then again, some function signatures take too many arguments (glVertexAttribPointer/glBufferData... or the Win32 API come to mind), and inserting line breaks in calls can sometimes improve readability.


> “I find that there must be a balance where names are clear and unique enough while not tiring the eyes too much by making them read long boring repetitive prefixes all the time.”

I have that problem, repetitive prefixes getting in the way of reading the code, and I’ve been doing some experiments with what could be described as “local namespacing”, only inside an expression or statement, with the prefix/suffix feature of the backstitch macro in my C preprocessor:

https://sentido-labs.com/en/library/cedro/202106171400/#back...

For instance, writing a program with libxml2, I write:

    Next(reader) @xmlTextReader...;
which gets translated to

    xmlTextReaderNext(reader);
and I find the first easier to read.

Whether that’s the case for others, I don’t know.

A longer example from the link I wrote above, this time for libuv:

    @uv_thread_...
        t hare_id,
        t tortoise_id,
        create(&hare_id, hare, &tracklen),
        create(&tortoise_id, tortoise, &tracklen),

        join(&hare_id),
        join(&tortoise_id);
Result:

    uv_thread_t hare_id;
    uv_thread_t tortoise_id;
    uv_thread_create(&hare_id, hare, &tracklen);
    uv_thread_create(&tortoise_id, tortoise, &tracklen);

    uv_thread_join(&hare_id);
    uv_thread_join(&tortoise_id);


In C you can call functions the file doesn't see (implicit declarations), and since namespaces change which symbol an expression refers to depending on which symbols exist in the program, this would lead to a lot of strange scenarios. C++ doesn't allow this, and it is the main way in which C++ isn't compatible with C.


Technically, they are a part of the attribute syntax in C23. Maybe a backdoor way to get them into a future standard.


I'm skeptical of it. I think it is a

std::opinion::bad_idea()

;)


maybe we could call the language with such extensions "C++"


Except that C++ doesn't serve that role, C++ is its own whole thing. I think "C but with namespaces and a method call syntax (and templates?)" would be a great language which would occupy a completely different space than C++.


if you introduce namespaces and method calls you have to introduce name mangling, to differentiate

    namespace foo { void x(); }
and

    namespace bar { void x(); }
and then you have to rely on compiler vendors to use the same mangling everywhere, otherwise you end up exactly in the C++ position where there are multiple incompatible name-mangling schemes; thus C code compiled with e.g. cl.exe would not be able to call a C function compiled with gcc (and FFIs wouldn't be able to either, so you lose the "easy language bindings" "feature" of the operating system ABIs)


Namespaces by themselves aren't a reason why name mangling is needed. There isn't a technical reason why foo::x couldn't be the symbol name, literally. (Not sure what ELF and PE/COFF etc would think of these names in current implementations).

Name mangling is needed if you want to overload functions and distinguish them only by the types (not the names) of their parameters. This is where it gets ugly at the binary level.


> There isn't a technical reason why foo::x couldn't be the symbol name, literally.

what do you do when you want to access your C function from a language which is binary-compatible with C but uses :: for something else? [a-zA-Z_][a-zA-Z0-9_]* identifiers are the only thing that the whole world more-or-less standardized around.

E.g. in Fortran you can directly import a C function and call it. But "::" in the middle of a function name would very likely fail (I don't know enough Fortran to tell for sure, but given how the syntax looks...):

     subroutine foo (a, n) bind(c)
       import
       real(kind=c_double), dimension(*) :: a
       integer (kind=c_int), value :: n
     end subroutine foo
allows calling a "void foo(double *a, int n);" function defined in C. I imagine that

     subroutine ns::foo (a, n) bind(c)
would likely not work


> what do you do when you want to access your C function from a language which is binary-compatible with C but uses :: for something else?

Typically you'd do this by picking a different internal name for the function, and putting the external symbol in quotes.


Fair point. The C naming convention with flat names is quite conservative and consequently allows the names to be used directly from most other languages without any language-aware mapping / compatibility layer.


Rust doesn't allow function overloading. Name mangling is used there to implement linking multiple versions of an external module into the final build.


Although Rust doesn't have ad-hoc polymorphism, it does have polymorphism, and so it needs to track multiple versions of the same function anyway.

Take String::contains(). The equivalent feature in C++ is an overloaded function, so there are, I believe, three versions of this function which take different parameters: a string, a char, and a pointer to chars. They do similar things in practice, but the compiler has no idea; there are just three functions with the same name. However, the Rust feature is polymorphic: there are N versions of this function depending on which monomorphisations are chosen at compile time. The compiler knows these are all the same function, but the parameters have different types in practice at runtime, and so the generated machine code is different. If your program calls String::contains(cat_photo_jpg), the compiler will produce the code to call it with your Jpeg type or whatever, and it will need to keep it distinct from the version where it takes a String or a char or whatever.

Rust does this more often than C++ because it cannot choose ad hoc polymorphism. So if there should be a foo function which can take parameters bar, baz or quux, we need to decide either that bar, baz and quux all implement some trait which foo takes, or we need three separate functions foo_with_bar, foo_with_baz and foo_with_quux.


This question is very OT considering what TFA is about, but…

What does the term ad-hoc polymorphism mean in your comment? Are you saying Rust does not have ad-hoc polymorphism because the syntax acts as though the function is parametrically polymorphic, and only once monomorphisation occurs are the different functions per argument type generated, making it N different functions? Does this line of thinking also say Haskell does not have ad-hoc polymorphism?

And in case any reader wonders, I am genuinely interested in your response to these questions (i.e. this isn’t bait nor rhetorical).


Indeed; I am wondering why GP does not consider traits in Rust a form of ad hoc polymorphism.


Ad hoc polymorphism is distinct from parametric polymorphism in that rather than parameter type itself being a parameter, there are just an arbitrary (ad hoc determined) set of types allowed.

The C++ standard library defines contains(x) so that it'll work for a string x, a single char x or a pointer to chars x. Nothing else can work, those are the arbitrary list of types which work, that's ad hoc polymorphism.

A separate implementation is provided for each of those three cases, which is why if I've got a JPEG, I can't instead ask if the string contains the JPEG, that's not one of the three implementations provided.

The Rust standard library just defines the type of x in terms of a trait, Pattern. So, my Jpeg type can just implement Pattern and now I can ask if Strings contain the Jpeg and that works because contains delegates the matching problem to Pattern and Jpeg implements Pattern -- only the specific idea of "contains" as distinct from "begins with" or "split" or a dozen other functions is handled by the contains function.

I am not a (serious) Haskell programmer, but I would argue that Haskell also lacks ad hoc polymorphism here as I understand it (and to be clear: I think this is in general a good, or at worst reasonable, choice).

The place where ad hoc really shines is when you've got a function that, say, makes complete sense with exactly two (or maybe three; fewer than two is irrelevant, and more than three seems unlikely to provide reasonable ergonomics) specific types, which otherwise have nothing useful in common.

For example suppose I've got a whole variety of bird types, Goose, Chicken, Ostrich, Penguin, Sparrow and so on, and I've got this function thunk() and I realise that, oddly it makes sense to thunk a Sparrow or an Ostrich, but literally no other birds at all. I think hard about it, but the best I can come up with to describe this property is "Thunkable" since all it really means is you can thunk() them. And there's no "content", there's no special implementation work in "Thunkable" that shouldn't live in thunk() for maintenance anyway. In this case ad hoc polymorphism is great because it saves needing to make this stupid "Thunkable" trait / type class / type-of-types / interface just to group together Sparrow and Ostrich for this single purpose.

But I'd argue the "Thunkable" case is rare, and C++ has a lot of cases where ad hoc polymorphism was the wrong choice and they fell into it.

I mentioned three types; really, C++ contains() does only two things but spells one of them two ways for 20+-year-old reasons. It can do strings (as std::string, but also via C's char * type) and it can do a single character char (one code unit, so no poop emoji).

Rust provides four Patterns, they are a string reference, a single char (a Unicode scalar, so yes a poop emoji works), a slice of chars (any of the chars matches), or a predicate which matches characters.

Now, do any of those four feel like things you'd definitely never want in C++? Because if C++ wanted all of them that's now five overloads for contains. And five overloads for find, and for every other matching function, on every string or string-like type...

I believe ad hoc polymorphism is so rarely what you really want, and yet it so often detracts from the rest of the language facilities that "We don't have that" is a sensible language design choice, same as for multiple inheritance.


I think my issue is with your definition. Type classes in Haskell are the implementation for ad-hoc polymorphism, and likewise traits in Rust. I think the definition you are using is not the commonly given one and that is where my confusion came from.

Typeclasses (in a Haskell context) were formalized in the Wadler and Blott paper “How to make ad-hoc polymorphism less ad hoc”. And for Rust, in [1] traits are explicitly stated to be the method by which Rust achieves ad-hoc polymorphism.

1: Klabnik, Steve; Nichols, Carol (2019-08-12). "Chapter 10: Generic Types, Traits, and Lifetimes". The Rust Programming Language (Covers Rust 2018)


I didn't know either of these things, I guess now I have some reading to do, so thanks.

Edited to add: Hmm. Actually though, surely that second reference is just "The Book" as it's called, did it actually say it's about ad hoc polymorphism? Because I've read this section of The Book, although I hadn't in 2018, and it doesn't mention "Ad hoc polymorphism".

What's there now (I just checked) is a description of how you'd approach this problem in Rust, using traits, but it doesn't claim this is ad hoc polymorphism, and sure enough doesn't involve an arbitrary set of types, which is sort of the point of why "ad hoc" is there in the name.


The standard definition of ad-hoc polymorphism is attributed to Strachey:

"Strachey chose the adjectives ad-hoc and parametric to distinguish two varieties of polymorphism [Str67]. Ad-hoc polymorphism occurs when a function is defined over several different types, acting in a different way for each type. A typical example is overloaded multiplication: the same symbol may be used to denote multiplication of integers (as in 3*3) and multiplication of floating point values (as in 3.14*3.14)."

That is from the Wadler paper where typeclasses are formalized. Typeclasses and Traits are the implementation details for those function symbols that vary in implementation for each type. The restrictions on the types (like which types implement a trait or have a type class defined for it) are the types the symbol can be used on.

You seem to be focusing on the ‘arbitrary set of types’ point, but the only connection between the types accepted by Rust’s generic functions (which are functions that accept a type provided it has some trait) are that they take types which have an impl for that trait.

I think there is a bit of ambiguity regarding the term ad-hoc polymorphism at play. You seem to think the trait/typeclass implementation of ad-hoc polymorphism (which was invented to formalize a well behaved class of ad-hoc polymorphic functions) makes it no longer ad-hoc. My position echos Wadler, it’s still ad-hoc but just less ‘ad-hoc’ (i.e. more formalized).


It comes to this phrase about a function "acting in a different way for each type".

In C++ there are literally three separate implementations of std::string's contains method for three type signatures. This is pretty clearly what is being discussed as "acting in a different way".

In Rust there's just one, here's the entire function body of contains: pat.is_contained_in(self)

OK, well that's just buck passing right? Clearly this is_contained_in() method on Pattern is really just the contains() implementation, we're passing the work to this function that as you point out needs to be implemented by each of the matching types for Pattern.

Except, wait, Pattern actually defines is_contained_in(haystack), thus: self.into_searcher(haystack).next_match().is_some()

Sure enough Pattern implementations although they're not forbidden from implementing is_contained_in themselves, do not in fact do that, they just implement into_searcher. Our hypothetical Jpeg type can provide a suitable into_searcher implementation which results in a Searcher for the Jpeg somehow, without knowing what contains() or split_once() or trim_start_matches() do, and now they will work on Jpegs.

So the "acting in a different way for each type" for contains() ends up only being because of details about the inner behaviour of that type, which is exactly parametric polymorphism so far as I can see.


Rust breaks parametricity via functions like size_of and in unstable features such as specialization.


You do realistically need a kind of name mangling, yes, if you want to keep using [a-zA-Z0-9_] as the set of characters in symbol names. But without overloading or other fancy stuff, you could have a super simple name-mangling scheme: separate the parts with a double underscore "__".



