
Programming |OT| C is better than C++! No, C++ is better than C

I don't know, but I come from a school of thought where I'm not going to ask you how to write a hashtable unless one of the things you'll be doing for me is writing a hashtable. (And with that answer, thank goodness you wouldn't be writing a hashtable!) Given that you'll be using standard tools and libraries that already include working hashtables, I'd rather ask you about things you actually will be doing, which could include knowing when and where to use one.

Granted, I know at some companies, you'll actually be the one writing the tools and libraries, and therefore, this is something you'll need to know or be able to learn. If you're like me, meaning you're working at a bank and/or writing line-of-business apps, then the internal implementation details are irrelevant; the pragmatic usage details are what's important.
 
I'm talking about some dirty things you can do with git, including rewriting history and rebasing.

Admittedly I use git in a pretty normal way, but I find rebasing to be fairly intuitive and an extremely useful operation. Is there some backdoor I'm not aware of that allows you to do really gross things with it? Do you have an example of some kind of rebasing that I shouldn't do?

For example, if I'm working on a patch that I want to merge to another branch that is at a different point in time, I switch to that branch, checkout an old revision (the same one that mine is based on top of), then rebase to the head of that branch. This is an operation that has been notoriously painful in other VCSes that I've used.

I assume there's some other way to use rebasing that is bad, but maybe I just haven't seen it.

I'd also be curious for an example of rewriting history that you consider bad. For example, I often work on multiple patches in parallel, and often multiple patches stagnate while I wait for them to be reviewed. But later patches depend on functionality from earlier patches. So I put the earlier patch up for review, then continue working. When I get review feedback, I need to go back and modify the original patch. This is easy by rebasing to -- I assume -- rewrite history. I commit my patch to tip, run "git rebase -i", move the patch to a different location in the commit sequence, mark it as a fixup so it merges into the previous patch, then git rewrites all the subsequent patches accordingly.

Obviously you never do this against the remote, but it's extremely useful to be able to do this thing against your local working copy.
 

YoungOne

Member
Sorry if it's a dumb question, but I'm looking to get into learning Java. Would it be alright to install the JDK on my everyday use/browsing computer?
 

Slavik81

Member
Why would 2D graphics be in the language spec?
Part of it is that they want people learning C++ to be able to easily build something with graphical output. It's also for simple graphical programs, like maze generators. I think it's basically adding Cairo as a standard library component.
 
Part of it is that they want people learning C++ to be able to easily build something with graphical output. It's also for simple graphical programs, like maze generators. I think it's basically adding Cairo as a standard library component.

Also worth mentioning that anyone can submit a proposal. I have very little confidence this will make it through committee.
 
Somehow, I would like to have public unit tests, and (possibly a bit hidden) unit tests that check some implementation details. Of course, those tests are not future-proof like the others, but I think they have their value for development purposes.

Resurrecting an old post. Last time I said there wasn't really a good way to do this. I was kinda wrong. I had this come up the other day and thought of a pretty decent solution.

First, instead of making the implementation details private, make them protected. So you've got something like this:

Code:
// Foo.h
class Foo {
public:
  void doSomething();
protected:
  int x;
};

Then, in the file where you write your unit tests, inherit from it and put a using declaration to bring it into public scope.

Code:
// TestFoo.cpp
class FooDetails : public Foo {
public:
  using Foo::x;
};

TEST(TestFoo, X) {
  FooDetails F;
  EXPECT(F.x == 7);
}

Using FooDetails you can change the access level of every protected member of the base class to make it public, and this is limited entirely to the test file, so normal code is still restricted.
 

cyborg009

Banned
So guys, I was interviewing for this job and they wanted me to create some views for them from their database. They use MySQL but I'm only familiar with SQL Server. I wanted to create indexed views for them but that doesn't seem like it would work in MySQL. So are materialized views the way to go?
 

Makai

Member
Lack of algebraic data types has really taken a toll on my opinion of most major languages. How do we work like this?
My favorite feature. Makes state machines so comprehensible and bugproof. Then I go to work and use C# - and nobody cares when I explain what could be.

---

http://www.neogaf.com/forum/showthread.php?t=1192595

This thread interests me, but it's way too old to bump. Obviously, gameplay bugs are caused by game programmers, not game engines - but I've also seen bug tickets resolved with, "can't do nothing cause Unity." And Unity demands a certain level of spaghetti, so I can believe the guy who said every RPG that uses it is filled with inexplicable bugs. At the game jam I went to today, I overheard three different teams trying to figure out how to make Unity and git work together - no idea how they finished ReCore. And OP's really not kidding about the performance - AAA games with massive asset loads should not run better than 2D sprite games. I see GC hitches in nearly every indie game I play now - Shenzhen I/O could have gotten away with it without me noticing, but I think they went with C++ or something anyway. I don't think the solution for most developers is necessarily "switch to Unreal", but they should probably do something, since CPUs are stagnant.
 

Somnid

Member
My favorite feature. Makes state machines so comprehensible and bugproof. Then I go to work and use C# - and nobody cares when I explain what could be.

Seriously. I had to build a typical query provider that queries a database for a web service (as I have many times) and just from a design standpoint I badly wanted a return type that was either a DB error or a result set, because it makes so much sense. But no, you have to have the result set as the return type and throw the DB error. It's gross, nothing enforces that the caller catch that exception, and in fact it's not caught until the global exception handler, which maps it to a 500 response, but there's no telling what type of exception could have happened or how it's wrapped, and it just leads to a big non-descriptive catch-all mess because downstream could have thrown anything. That and null propagation is messy and easily the single biggest cause of runtime exceptions.
 
Seriously. I had to build a typical query provider that queries a database for a web service (as I have many times) and just from a design standpoint I badly wanted a return type that was either a DB error or a result set, because it makes so much sense. But no, you have to have the result set as the return type and throw the DB error.

Make a class called ErrorOr<T>, return that. Assert if you try to access the T without checking the error.

It's gross, nothing enforces that the caller catch that exception.

Careful what you wish for, you might end up with Java exceptions.
 

Somnid

Member
Make a class called ErrorOr<T>, return that. Assert if you try to access the T without checking the error.

So I'm assuming you start with a class that looks something like:

Code:
class ErrorOr<T> {
  public T Result { get; set; }
  public Exception Exception { get; set; } 
}

How do you enforce checking the exception? And assert is obviously still problematic because it's not a compile-time check. If I have to cover it with unit tests to ensure it, then it's not much of a gain.

My thought was to use an empty interface and have both implement it and use something like Function C# to pseudo-destructure it, but then that gets into object hierarchies as I'd have to wrap everything.
 
So I'm assuming you start with a class that looks something like:

Code:
class ErrorOr<T> {
  public T Result { get; set; }
  public Exception Exception { get; set; } 
}

How do you enforce checking the exception? And assert is obviously still problematic because it's not a compile-time check. If I have to cover it with unit tests to ensure it, then it's not much of a gain.
My thought was to use an empty interface and have both implement it and use something like Function C# to pseudo-destructure it, but then that gets into object hierarchies as I'd have to wrap everything.


You don't need a compile time check for this to be effective. All you need is a runtime check which asserts if you have never checked whether the operation succeeded.

You don't need to worry about writing a test to test this specific code path, because using the return value of the query is going to happen 100% of the time. So, this codepath is already covered by every other test in your system which runs this query. Consider this class:

Code:
class ErrorOr<T> {
  public ErrorOr(T R) {
    Result_ = R;
    Checked_ = false;
    Error_ = null;
  }

  public ErrorOr(Exception E) {
    Result_ = default(T);
    Error_ = E;
    Checked_ = false;
  }

  public T Result {
    get {
      System.Diagnostics.Debug.Assert(Checked);
      return Result_;
    }
  }

  public bool Succeeded {
    get {
      Checked_ = true;
      return Error_ == null;
    }
  }

  public Exception Error { get { return Error_; } }

  public bool Checked { get { return Checked_; } }

  private bool Checked_;
  T Result_;
  Exception Error_;
}

Again, this is not as good as a compile time check, but it's definitely better than throwing from inside the function and returning the result on success, for a number of reasons.

1) it documents in the interface of the function that an error might occur.
2) it forces the user to decide how to handle the error, eliminating the all too common propagate-up-to-main.
3) It forces the user to check the error even if the operation succeeded.

#3 is subtle, but important. Because 100% of calls to a function returning this class must be followed by an error check, you can skip writing the unit test. As long as this codepath is executed by *any* test, even a test that passes (which hopefully you already have some of), you will ensure that the failure case is also handled.

So again, obviously not as good as compile time checking, but still better than just throwing from inside the body IMO.

I guess you could take this one step further and do something like this:

Code:
class ErrorOr<T> {
  public ErrorOr(T R) {
    Result_ = R;
    Error_ = null;
  }

  public ErrorOr(Exception E) {
    Result_ = default(T);
    Error_ = E;
  }

  public void Handle(System.Action<Exception> EH, System.Action<T> SH) {
    if (Error_ != null)
      EH(Error_);
    else
      SH(Result_);
  }

  T Result_;
  Exception Error_;
}

and require the user to pass some lambdas. Seems a bit like overkill though if you already have good test coverage.
 

Somnid

Member
You don't need a compile time check for this to be effective. All you need is a runtime check which asserts if you have never checked whether the operation succeeded.

You don't need to worry about writing a test to test this specific code path, because using the return value of the query is going to happen 100% of the time. So, this codepath is already covered by every other test in your system which runs this query. Consider this class:

and require the user to pass some lambdas. Seems a bit like overkill though if you already have good test coverage.

That's about how I would have implemented it. It doesn't make me happy, but it's still probably better than throwing. There are subtle errors here if the consumer tries to reuse the object, so there's still an amount of discipline involved. Curious, is this actually a pattern you've used in production code?
 
That's about how I would have implemented it. It doesn't make me happy, but it's still probably better than throwing. There are subtle errors here if the consumer tries to reuse the object, so there's still an amount of discipline involved. Curious, is this actually a pattern you've used in production code?

The lambdas idea is more or less the equivalent of what boost::variant does. This should be in C++17, but for now it's just Boost. You could also make it an interface instead of a lambda. Personally I like the first pattern better (checking and asserting) because it makes it easy to do check/return.

I've used the ErrorOr paradigm extensively in production code, albeit in C++ where we have exceptions turned off. It's more or less what you would get if you took the best parts of error codes and exceptions and combined them. It's slightly better in C++ because you can also assert if you go out of scope without checking, meaning all exits out of the function require you to check the value.
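The check-before-access idea isn't tied to C# or C++, by the way. Here's a rough Python sketch of the same ErrorOr pattern (the class and names are made up for illustration, not from any library):

```python
class ErrorOr:
    """Hypothetical sketch: holds either a result or an error, and asserts
    if the result is read before anyone checked whether it succeeded."""

    def __init__(self, result=None, error=None):
        self._result = result
        self._error = error
        self._checked = False  # set once succeeded is inspected

    @property
    def succeeded(self):
        self._checked = True
        return self._error is None

    @property
    def result(self):
        # loudly fail if the caller skipped the error check
        assert self._checked, "accessed result without checking succeeded"
        return self._result


ok = ErrorOr(result=42)
if ok.succeeded:
    print(ok.result)  # 42
```

Same trade-off as the C# version: it's a runtime check, not a compile-time one, but any test that exercises the happy path also exercises the "did you check?" discipline.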
 

Koren

Member
Resurrecting an old post. Last time I said there wasn't really a good way to do this. I was kinda wrong. I had this come up the other day and thought of a pretty decent solution.
Yes, that's a solution I like... I don't often use private, so I don't even have to change anything.

Many thanks for sharing, that's a solution I'll probably use often.
 
Resurrecting an old post. Last time I said there wasn't really a good way to do this. I was kinda wrong. I had this come up the other day and thought of a pretty decent solution.

First, instead of making the implementation details private, make them protected. So you've got something like this:

Code:
// Foo.h
class Foo {
public:
  void doSomething();
protected:
  int x;
};

Then, in the file where you write your unit tests, inherit from it and put a using declaration to bring it into public scope.

Code:
// TestFoo.cpp
class FooDetails : public Foo {
public:
  using Foo::x;
};

TEST(TestFoo, X) {
  FooDetails F;
  EXPECT(F.x == 7);
}

Using FooDetails you can change the access level of every protected member of the base class to make it public, and this is limited entirely to the test file, so normal code is still restricted.

Woah, why use inheritance and protected, which changes the semantics? Can't you just use a public inner class and then call its methods in tests?
 
Woah, why use inheritance and protected, which changes the semantics? Can't you just use a public inner class and then call its methods in tests?

I'm not sure I follow. But do you mean this?

Code:
class Foo {
public:
  class Test {
  public:
    Test(Foo &F) : F(F) {}
    int getX() const { return F.x; }
  private:
    Foo &F;
  };
private:
  int x;
};

That's honestly pretty ugly if so. But beyond that, testing code is less brittle if it's decoupled from the implementation that is being tested. This couples them for no good reason. Moreover, suppose you wanted your test class to be able to expose write-access to the internals. You would have to expose write-access from this public inner class. And now any user of the class could subvert the member access system by declaring an instance of Foo::Test to write to an arbitrary Foo instance for them.

With the approach I suggested, the test code is completely isolated to the test file. Yes, user code could do the same thing, but at least it wouldn't give them the ability to modify an arbitrary Foo instance, it would have to be an instance of their FooDetails subclass.

Also, while it changes the "semantics", it doesn't change the runtime behavior, so it's not like your test code is testing something different than you're running.

(That said, if you had something else in mind, I might just not be picking up on it)
 
I'm not understanding something about list slicing syntax in python.

I thought these two versions of the code would print the same thing:

Code:
arr4 = [4, 5, 6]
def poorSub(arr):
	sum = 0
	for i in range(0, len(arr)):
		for j in range(i, len(arr)):
			tempsum = 0
			for k in range(i, j+1):
				tempsum += arr[k]
				print("At {} we do {} + {}".format(i, tempsum - arr[k], arr[k]))
			if (tempsum > sum):
				sum = tempsum
	return sum
	
	
print(poorSub(arr4))

Code:
arr4 = [4, 5, 6]
def poorSub(arr):
	sum = 0
	for i, element in enumerate(arr):
		for j, elem in enumerate(arr[i:]):
			tempsum = 0
			for k, el in enumerate(arr[i:j+1]):
				tempsum += arr[k]
				print("At {} we do {} + {}".format(i, tempsum - arr[k], arr[k]))
			if (tempsum > sum):
				sum = tempsum
	return sum
	
	
print(poorSub(arr4))

the console log for the top code is:

Code:
At 0 we do 0 + 4
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 9 + 6
At 1 we do 0 + 5
At 1 we do 0 + 5
At 1 we do 5 + 6
At 2 we do 0 + 6
15

and for the bottom
Code:
At 0 we do 0 + 4
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 9 + 6
At 1 we do 0 + 4
15

What am I doing wrong here?
 
I'm not understanding something about list slicing syntax in python.

I thought these two versions of the code would print the same thing:

Code:
arr4 = [4, 5, 6]
def poorSub(arr):
	sum = 0
	for i in range(0, len(arr)):
		for j in range(i, len(arr)):
			tempsum = 0
			for k in range(i, j+1):
				tempsum += arr[k]
				print("At {} we do {} + {}".format(i, tempsum - arr[k], arr[k]))
			if (tempsum > sum):
				sum = tempsum
	return sum
	
	
print(poorSub(arr4))

Code:
arr4 = [4, 5, 6]
def poorSub(arr):
	sum = 0
	for i, element in enumerate(arr):
		for j, elem in enumerate(arr[i:]):
			tempsum = 0
			for k, el in enumerate(arr[i:j+1]):
				tempsum += arr[k]
				print("At {} we do {} + {}".format(i, tempsum - arr[k], arr[k]))
			if (tempsum > sum):
				sum = tempsum
	return sum
	
	
print(poorSub(arr4))

the console log for the top code is:

Code:
At 0 we do 0 + 4
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 9 + 6
At 1 we do 0 + 5
At 1 we do 0 + 5
At 1 we do 5 + 6
At 2 we do 0 + 6
15

and for the bottom
Code:
At 0 we do 0 + 4
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 0 + 4
At 0 we do 4 + 5
At 0 we do 9 + 6
At 1 we do 0 + 4
15

What am I doing wrong here?

Haven't tested this out, but it looks to me like you're using enumerate wrong. The index from enumerate starts at 0 by default. If you enumerate the values (7, 8, 9) you get (0, 7), (1, 8), (2, 9).

But your top and bottom treat k in the expression "for k, el in enumerate(...)" as equivalent to the k in the expression "for k in range(i, j+1)". In the former, k will start from 0, but in the latter, k will start at i.

You need to be doing "tempsum += el" and "tempsum - el" instead of "tempsum += arr[k]" and "tempsum - arr[k]"
 
Haven't tested this out, but it looks to me like you're using enumerate wrong. The index from enumerate starts at 0 by default. If you enumerate the values (7, 8, 9) you get (0, 7), (1, 8), (2, 9).

But your top and bottom treat k in the expression "for k, el in enumerate(...)" as equivalent to the k in the expression "for k in range(i, j+1)". In the former, k will start from 0, but in the latter, k will start at i.

You need to be doing "tempsum += el" and "tempsum - el" instead of "tempsum += arr[k]" and "tempsum - arr[k]"

Huh. Yeah I did make that correction (swapping arr[k] with el). It fixes some of the lines of output. But yeah I need to read up on enumerate I guess and figure out how to get the same amount of output.

Code:
At 1 we do 0 + 5
At 1 we do 5 + 6
At 2 we do 0 + 6

are the three lines of output I lose.
 

Koren

Member
Huh. Yeah I did make that correction (swapping arr[k] with el). It fixes some of the lines of output. But yeah I need to read up on enumerate I guess and figure out how to get the same amount of output.
The issue is not with enumerate, but with the slice.

Slicing gives you a NEW list. So when enumerate gives you an integer, it represents the position in the new list (the sliced one), not the original one.

If you have L = [ "a", "b", "c", "d", "e" ]

enumerate(L) produces
0, "a"
1, "b"
2, "c"...

enumerate(L[1:]) produces
0, "b"
1, "c"
2, "d"...

Notice that the integers are not the indexes of the elements in L (but those in L[1:])

That's a common one...
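Worth adding: enumerate takes an optional start argument, so you can make the indices line up with the original list instead of restarting at 0 for the slice:

```python
L = ["a", "b", "c", "d", "e"]

# default: indices restart at 0 for the slice
print(list(enumerate(L[1:])))           # [(0, 'b'), (1, 'c'), (2, 'd'), (3, 'e')]

# start=1: indices match positions in the original L
print(list(enumerate(L[1:], start=1)))  # [(1, 'b'), (2, 'c'), (3, 'd'), (4, 'e')]
```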
 

Koren

Member
Yup.
Code:
for j, elem in enumerate(arr):
gets me what I want.

Actually, I'd say it doesn't exactly do the same, even if it prints the same lines... You'll still examine cases where j < i, and only the fact that arr[i:j+1] is an empty list "avoids" the inner loop.

I would rather do, if you REALLY want to slice:
Code:
arr4 = [4, 5, 6]
def poorSub(arr):
    sum = 0
    for i, element in enumerate(arr):
        for j, elem in enumerate(arr[i:]):
            j += i  # <- add i to j
            tempsum = 0
            for k, el in enumerate(arr[i:j+1]):
                k += i  # <- add i to k
                tempsum += arr[k]
                print("At {} we do {} + {}".format(i, tempsum - arr[k], arr[k]))
            if (tempsum > sum):
                sum = tempsum
    return sum

That way, you get exactly the same loops as without the slices, and j and k have exactly the same meaning/values, so you can use arr[k] or el, as you wish (although I don't see the point of using enumerate if you don't use el)

It's a case where slicing is not that great an idea, though... Slicing implies a copy (of the list, not of the elements in the list), and here a copy is really wasting your time (although not changing the complexity).

But since I suppose you're trying to find max_sub_sum in O(n^3), then O(n^2), then O(n) probably as a training, optimizing the O(n^3) solution is probably pointless...

I'd advise against using "sum" as a name, though. It's a builtin function (and there aren't that many in Python) and in fact, it could even be useful here... The stupid O(n^3) solution is more easily readable this way, I think:

Code:
def poorSub(arr):
    res = 0
    for i in range(len(arr)):
        for j in range(i, len(arr)):
            res = max(res, sum(arr[i:j+1]))
    return res
 
Actually, I'd say it doesn't exactly do the same, even if it prints the same lines... You'll still examine cases where j < i, and only the fact that arr[i:j+1] is an empty list "avoids" the inner loop.

I would rather do, if you REALLY want to slice:
Code:
arr4 = [4, 5, 6]
def poorSub(arr):
    sum = 0
    for i, element in enumerate(arr):
        for j, elem in enumerate(arr[i:]):
            j += i  # <- add i to j
            tempsum = 0
            for k, el in enumerate(arr[i:j+1]):
                k += i  # <- add i to k
                tempsum += arr[k]
                print("At {} we do {} + {}".format(i, tempsum - arr[k], arr[k]))
            if (tempsum > sum):
                sum = tempsum
    return sum

That way, you get exactly the same loops as without the slices, and j and k have exactly the same meaning/values, so you can use arr[k] or el, as you wish (although I don't see the point of using enumerate if you don't use el)

It's a case where slicing is not that great an idea, though... Slicing implies a copy (of the list, not of the elements in the list), and here a copy is really wasting your time (although not changing the complexity).

But since I suppose you're trying to find max_sub_sum in O(n^3), then O(n^2), then O(n) probably as a training, optimizing the O(n^3) solution is probably pointless...

I'd advise against using "sum" as a name, though. It's a builtin function (and there aren't that many in Python) and in fact, it could even be useful here... The stupid O(n^3) solution is more easily readable this way, I think:

Code:
def poorSub(arr):
    res = 0
    for i in range(len(arr)):
        for j in range(i, len(arr)):
            res = max(res, sum(arr[i:j+1]))
    return res

Thanks for the advice. And yeah, this is one of those "find O(n^3), etc..." assignments. I'll change the name of sum. I should probably read up on the built-in Python functions.
 

Koren

Member
I should probably read up on the built-in python functions.
Well, if you're interested in this, the official documentation is really decent for those, and there's only about 50.

- type "constructors" (slice, object, int, str, bool, bytes, bytearray, float, complex, dict, tuple, list, range, frozenset, set, memoryview)
- a couple math functions (abs, divmod, pow)
- a couple really useful iterables builders (enumerate, zip, reversed, sorted)
- some also really useful functions that act on iterables (min, max, sum, any, all)
- duck typing sugar coating (len, next, iter)
- some I/O (open, input, print, format (well, somehow))
- a couple converters (bin, hex, oct, ascii, ord, chr, repr, hash (kinda))
- some functional remains that Guido would like to remove ^_^ (map, filter)
- some introspection related functions (dir, locals, globals, isinstance, issubclass, callable, super, hasattr, getattr, setattr, delattr, help)
- a couple decorators for classes (classmethod, staticmethod, property)

https://docs.python.org/3/library/functions.html

There's a world between min and property, but the page is probably worth a quick read if you're into Python and have time. Just reading about enumerate, zip, reversed & sorted can save you a LOT of time. Same for any/all.
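To give a flavour of how much work those iterable helpers save, here's a tiny example combining zip, sorted, any and all (the data is made up):

```python
scores = [88, 92, 75]
names = ["ann", "bob", "cy"]

# zip pairs up iterables; sorted orders the resulting tuples
ranked = sorted(zip(scores, names), reverse=True)
print(ranked)  # [(92, 'bob'), (88, 'ann'), (75, 'cy')]

# any/all consume an iterable of booleans and short-circuit
print(all(s >= 70 for s in scores))  # True
print(any(s >= 90 for s in scores))  # True
```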
 
So I've been dabbling in .NET Core and trying to use NPM + Grunt for front end dependencies.

Why am I supposed to create a task for each npm package to copy it from "node_modules" to wwwroot/lib? I don't get the current state of affairs for modern web development :(
 

Jokab

Member
Well, if you're interested in this, the official documentation is really decent for those, and there's only about 50.

- type "constructors" (slice, object, int, str, bool, bytes, bytearray, float, complex, dict, tuple, list, range, frozenset, set, memoryview)
- a couple math functions (abs, divmod, pow)
- a couple really useful iterables builders (enumerate, zip, reversed, sorted)
- some also really useful functions that act on iterables (min, max, sum, any, all)
- duck typing sugar coating (len, next, iter)
- some I/O (open, input, print, format (well, somehow))
- a couple converters (bin, hex, oct, ascii, ord, chr, repr, hash (kinda))
- some functional remains that Guido would like to remove ^_^ (map, filter)
- some introspection related functions (dir, locals, globals, isinstance, issubclass, callable, super, hasattr, getattr, setattr, delattr, help)
- a couple decorators for classes (classmethod, staticmethod, property)

https://docs.python.org/3/library/functions.html

There's a world between min and property, but the page is probably worth a quick read if you're into Python and have time. Just reading about enumerate, zip, reversed & sorted can save you a LOT of time. Same for any/all.
See, I don't get this. They're super-useful functions. Think I read somewhere that he refuses to implement reduce too, which would also be handy.
 

Koren

Member
See, I don't get this. They're super-useful functions. Think I read somewhere that he refuses to implement reduce too, which would also be handy.
Well...

Guido *hates* functional programming (unless I'm mistaken). Reduce *was* a builtin function in Python; it migrated to functools in Python 3, but is still available.

And somehow, while I like functional, I find map and filter useless in Python; I've used them only in really, really special cases.

In Python, map should be written
Code:
[ f(x) for x in L ]
and filter
Code:
[ x for x in L if f(x) ]

Reduce is the one that can be useful if you want to avoid an explicit loop (or a hideous one-liner). Even if most cases of folding are better written in another way (I fold daily in ML, less than once a month in Python).

Granted, map(f, L) is shorter, but Python's comprehensions grant you a free lambda
Code:
[ x**2 for x in L ]
is shorter than map(lambda x:x**2, L). It's even better if you do map+filter at the same time.

Also, you can produce generators, which is better than building whole lists...

Besides code golf, map and filter probably remain as builtins only for compatibility reasons with existing code...


Guido's position on the matter (before Python 3k):
http://www.artima.com/weblogs/viewpost.jsp?thread=98196

I agree with him that map and filter could have been removed. But I would have missed reduce (I think he was speaking for himself when he said he had trouble with it). I'm fine with it removed from builtins, though (I'd say they could have done the same with map/filter).

I also think there's many cases where lambda is useful (yes, you can define a local function to use as a key for sort, but is it really better?), so I'm glad it stayed.
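To make the comparison concrete, here's the map+filter spelling next to the comprehension on a toy list (nothing from the thread):

```python
L = [1, 2, 3, 4, 5]

# map + filter with lambdas...
a = list(map(lambda x: x**2, filter(lambda x: x % 2 == 0, L)))

# ...vs a single comprehension doing both at once
b = [x**2 for x in L if x % 2 == 0]

print(a)  # [4, 16]
print(b)  # [4, 16]

# and the generator-expression form, which builds no list at all
g = (x**2 for x in L if x % 2 == 0)
print(sum(g))  # 20
```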
 

upandaway

Member
Approaching the end of my Programming Languages class which was like 1/4th compilers and 3/4ths functional programming (and uh, one lecture of logical programming). I have to say functional programming is really fun, I was writing stuff a lot faster than I usually do and they mostly worked on my first try (I might be the exception though because all my friends had a tough time adapting to it)

It made me think that functional programming might be really well suited for deep learning from a distributed computing angle, but I haven't found any work in that direction
 
Well...

Guido *hates* functional programming (unless I'm mistaken). Reduce *was* a builtin function in Python; it migrated to functools in Python 3, but is still available.

And somehow, while I like functional, I find map and filter useless in Python; I've used them only in really, really special cases.

In Python, map should be written
Code:
[ f(x) for x in L ]
and filter
Code:
[ x for x in L if f(x) ]

Reduce is the one that can be useful if you want to avoid an explicit loop (or a hideous one-liner). Even if most cases of folding are better written in another way (I fold daily in ML, less than once a month in Python).

Granted, map(f, L) is shorter, but Python's comprehensions grant you a free lambda
Code:
[ x**2 for x in L ]
is shorter than map(lambda x:x**2, L). It's even better if you do map+filter at the same time.

Also, you can produce generators, which is better than building whole lists...

Besides code golf, map and filter probably remain as builtins only for compatibility reasons with existing code...


Guido's position on the matter (before Python 3k):
http://www.artima.com/weblogs/viewpost.jsp?thread=98196

I agree with him that map and filter could have been removed. But I would have missed reduce (I think he was speaking for himself when he said he had trouble with it). I'm fine with it removed from builtins, though (I'd say they could have done the same with map/filter).

I also think there's many cases where lambda is useful (yes, you can define a local function to use as a key for sort, but is it really better?), so I'm glad it stayed.

map and filter don't return full lists in Python 3:
Code:
>>> map(lambda x: x + 1, [1, 2, 3])
<map object at 0x7fd20384f470>
 

Megasoum

Banned
So I've learned some C++ a long time ago (almost 10 years ago now) and barely had a chance to practice since so I've lost a lot of it.

I'd like to get back into coding and was thinking about learning C#.

What would be the best resource to get back up to speed?

I'm not really big on books for learning stuff; I'm more of a hands-on kind of guy, with tutorials that get progressively more complex and with a lot of examples.

Any suggestions?

Thanks!
 

Koren

Member
map and filter don't return full lists in Python 3:
Probably worth a mention, indeed... Thanks.

Maybe I should have said that map(f, L) should be written
Code:
f(x) for x in L
without the brackets, because that's more correct, but, well... I thought that people who want map in Python probably don't know Python well enough to grasp the details about generators.


The fact is... map COULDN'T sensibly return a list. Since Python is duck-typed, map's second argument is any iterable, so the natural result is itself an iterable. Eagerly building a list would have been just wrong.

For example,
Code:
import string

# rot13 for lowercase letters; everything else is left untouched
def rot(c):
    return c if c not in string.ascii_lowercase else chr((ord(c) - 84) % 26 + 97)

coded = map(rot, "This is a secret message")  # lazy: nothing computed yet
Why would the result be a list? It could be a string, but should you want something other than a string, that's time wasted. You may not even need to rot13 all the characters.

It makes sense for map and filter to return generators. If you really want a list, there's no problem (even performance-wise) in doing
Code:
list(map(f, L))

By working on generators, map is far more usable. You can apply a function to a stream, for example
Code:
import sys

# strip then reverse each line of stdin, lazily, as it arrives
for line in map(lambda s: s.strip()[::-1], sys.stdin):
    print(line)

It works perfectly fine (otherwise map would have been even less useful).

This is also possible only because, by returning a generator, evaluation is delayed:
Code:
f = lambda n: n * n  # any function will do; it's never called eagerly

M = map(f, range(10**1000))       # instant: nothing is evaluated yet

[x for x, _ in zip(M, range(5))]  # only the first five values are computed

The only tricky part with map is that if its second argument may be modified, you have to consume the result of the map before modifying that argument, or be really, really, really cautious.

For example,
Code:
def f(n):
    return 3*n + 1 if n % 2 == 1 else n // 2

L = [123456]

# map is lazy, so elements appended to L are seen by the ongoing iteration
for elem in map(f, L):
    print(elem)
    L.append(elem)
usually creates an infinite loop (the Syracuse / Collatz sequence)

I said "usually" because I can't find anything clear about this point in the Python documentation... Sometimes it says "you shouldn't do this", sometimes it says the loop will be infinite, but not how it's handled... I'm pretty sure you can't (and shouldn't) rely on any particular behavior, but it bothers me a bit that this part is swept under the carpet.
 

Somnid

Member
So I've learned some C++ a long time ago (almost 10 years ago now) and barely had a chance to practice since so I've lost a lot of it.

I'd like to get back into coding and was thinking about learning C#.

What would be the best resource to get back up to speed?

I'm not really big on books for learning stuff; I'm more of a hands-on kind of guy who likes tutorials that get progressively more complex, with a lot of examples.

Any suggestions?

Thanks!

Honestly, if you kind of know what's going on, you might as well start with a project you want to make and just look things up as you go. C# has some of the best documentation of any language, so it's easy enough to just start running. Tutorial-wise, you could go through a Unity tutorial (though from what I've seen of it, it's not always best practice; it's fine for a novice).
 

Eridani

Member
god I hope learning to use github is worth it in the end.

Learning the basics of git is incredibly useful, since it's a great version control tool, and it's also required in a lot of jobs. Learning the more advanced git commands? I hope I never have to do that, and it's not really needed for personal projects. I end up relying on this method more often than I'd like to admit though, so maybe I should just learn how to do things properly at some point.
 

Slo

Member
There's maybe half a dozen git commands that you really need to be comfortable with and use daily. After that, use your Google-fu.
 

Koren

Member
Mercurial is, though :p

Bazaar too, it seems (although I've barely looked into it)


(SVN is kind of a special case; it wasn't designed for the same usage as Git/Hg... it's just a good replacement for the scripting mess that CVS was)
 

Pokemaniac

Member
Is it just me, or are the built-in collection classes in the C# standard library kind of awful? There just seems to be a general lack of options, and some of the ones that are there are bizarre (like LinkedList not implementing IList). Compared to what Java offers, it just seems a little lackluster.

I'm considering using C5 for collections so the code I'm writing can look a bit more sane. Anyone have any opinions on the library or any other recommendations?
 