Goodbye to Google Code Jam (withgoogle.com)
171 points by uptownfunk on March 25, 2023 | hide | past | favorite | 141 comments


For anyone wondering about the underlying reason for these events suddenly being cancelled: it is connected to Google laying off the team that organized these four events, as part of their January job cuts. I covered the details I could find a few weeks ago [1].

[1] https://blog.pragmaticengineer.com/google-coding-competition...


I appreciated the quote from your article that: "Based on information from insiders, Google’s coding competitions engaged more than 300,000 software engineers external to Google, annually. These coding competitions assisted in the hiring of thousands of software engineers each year, who were directly sourced from these events."

This suggests that Google Code Jam was a genuinely significant recruitment source. In contrast, when I searched for whether Code Jam was a significant part of Google's recruitment strategy, one of the top results, a thread on r/cscareerquestions where non-Google employees gave advice about it, really underplayed the recruitment part: https://www.reddit.com/r/cscareerquestions/comments/p7ioku/w...

The r/cscareerquestions commenters there could still have a point that other routes into the company are more direct than Code Jam, but the dismissive attitude of the top-upvoted commenter (e.g.: "No benefits. If anything, might even be harder to get interviews cause the guys grinding for those contests don't have time to make a proper resume.") leaned heavily on speculation, instead of taking the more balanced view that Google Code Jam was run in large part to recruit developers.


There's a difference between "does Code Jam help attract quality candidates to Google" and "does participating in Code Jam help a candidate in getting a job at Google". Presumably, Google cares about the former, and participants care about the latter. Both statements can be true or false to various extents regardless of each other.


> In contrast, when I searched about whether Code Jam was a significant part of Google's recruitment strategy, one of the top results on Reddit on r/cscareerquestions really underplayed the recruitment part

Google has a very standardized recruitment procedure. Once you're in the pipeline, you're judged with the same standards as everybody else (algo interview, system design...). Whether you're a Code Jam champion or not. Where Code Jam could help is to get contacted by a recruiter, but a simple referral could be enough for that, so it's not that hard to enter their recruitment pipeline.

Also, Code Jam problems are much harder than what you'd get in a coding interview, so it's not the best use of your preparation time to land a job at Google. Leetcode problems are more similar to what they ask.


It's there to find the non-networked developer - the kid in their mom's basement who is not in Silicon Valley or possibly not even in the U.S. and doesn't know anyone who works at Google, but learned to program on their own. Ironically this is the sort of developer that HN says can't get FAANG jobs because they lack the credentials and connections.


> Ironically this is the sort of developer that HN says can't get FAANG jobs because they lack the credentials and connections.

That's correct. As the reddit post noted, there is no benefit to coming to Google's attention via Code Jam. They'll give you an interview that way, but they'll also give an interview to anyone else who wants one.

And the approach you take to Code Jam is actually harmful to your performance in Google's hiring interviews, to the degree that the recruiter who contacts you based on your Code Jam record gives you a specific warning that Code Jam people tend to have a common set of problems (in terms of how their interviews are rated) and you shouldn't treat the interview as being similar to participating in Code Jam.


They don't give interviews to just anyone who applies. IIRC the first time I joined (2009, interviewed in late 2008) the stat was that ~2.3 million people had applied to Google, ~1000 had been hired that year, and your odds of getting in were like 2000:1. There were pretty well-known tricks for getting past the initial resume screen, like getting a referral, having an Ivy League degree, having a hot tech company on your resume, or coming through a recruiting avenue (eg. GSoC or TopCoder) that Google sponsored.

Maybe the interview:applicant ratio is better now that there's ~100K engineers rather than ~8K engineers, but a quick back of the envelope calculation indicates it's still likely not everyone that applies. On my team last year (before the hiring freeze) maybe ~20% of engineers do interviews regularly (a bit better than it was in ~2010), and we'd usually limit it to 1/week. That's ~20K interviewers * 50 interviews = 1M interviews/year, which is definitely better odds than in 2008, but still nowhere near enough to interview everyone who applies even if the number of applicants hasn't grown at all since 2008.


> the initial resume screen

At least as things stood last year, Google does the resume screen after the interview, not before.


"People, mostly children and very young adults, talking confidently about topics they have no idea about" can basically be Reddit's official slogan.


This is really sad. What’s even sadder is that the person who made this decision will probably never know how much damage it did to Google’s reputation. Code Jam had what I think were the best-designed and most insightful problems in all of competitive programming, not to mention innovative problem types and judging systems.

Google, you have just disappointed and pissed off many of the best algorithmic programmers in the world. They will not forget this. Is that who you want to be?


I'm not sure it's a given that Google was actually getting any kind of reputational benefit from Code Jam. As an example, the last significant HN submission discussing Code Jam was 8 years ago. It only came up in HN comments a handful of times per year, and those mentions were not particularly positive (not negative either, just neutral). Can you quantify that reputational benefit somehow?

Why is running a competition really well for 15 years something to be pissed off about? Why not be disappointed in Microsoft, Amazon and Apple for not running their own competitions, as well or better?


People are still going to apply to work in Google in droves. No need to be so melodramatic.


Couldn't this be confusing cause and effect? The article seems to insinuate that the product was cancelled because the team was laid off to save costs. But couldn't there be an unrelated reason why Google wanted to cancel the product, which then led to the decision to lay off the team?

I could imagine that with the advent of the likes of GPT, Google may have concluded that the competition has lost its utility in identifying talented coders to recruit.


You are overthinking this - the team that managed this got laid off; there's no AI story to it.


That is so bad. They could’ve kept 1 or 2 of them for the benefit of CS as a whole. They are undercutting their own future talent pool this way.


One of the biggest questions when doing layoffs is whether you should kill entire teams or cut more teams by a small amount. A 1-2 person team is often very bad for people's careers and creates major sustainability problems when people take vacations or leave or whatever.

I am sad that codejam is gone. I'm not certain that a skeleton crew would have been good for the skeleton crew.


Unless they see Bard as fulfilling that.


Topcoder will also be having their final TCO this year: https://codeforces.com/blog/entry/113201

Google Code Jam and TCO were the two biggest onsite competitive programming events not targeted at students.

This is a huge blow to the sport.


To be honest, TCO being discontinued isn't that big a deal IMO. Maybe I'd feel differently if I were a top competitor. However, for someone middle of the road like me, competing was an atrocious experience.

The web portal is awful and barely works. The desktop client looks like it's from the 2000s and also barely works. To my knowledge the system had weird constraints on input size, and instead of reading from stdin like every other serious site, it uses the weird "write it in a class with this method signature" format.

Personally, I felt the problem quality didn't compare to sites like Atcoder and most Codeforces contests, and I feel like most people (me included) don't enjoy the "hack phase" format. Maybe if I could bring myself to actually compete more often, my mind would change on these things.


Thankfully CodeForces is still going strong: https://www.codeforces.com/


It's a pity that nobody was able to reproduce the challenge phases of TopCoder SRMs.

Coding problems fast while leaving edge cases for later was really fun.


FYI, CodeForces allows challenges too.

They even implemented it slightly better: instead of having a separate challenge phase, you can “lock” your solution for a particular problem at any time (which means you cannot resubmit it) and then view other people's solutions, and challenge them with custom input.


Wow topcoder too? What a shame. End of an era


they don't mention why it's ending?


GPT?


We just did a competitive coding event and one of us based his strategy on ChatGPT, which was allowed. That got him through the qualification rounds, but he then finished in the middle of the pack, which I think matched his skills as a non-ChatGPT coder. So: not a significant advantage, and none of the top performers used ChatGPT.

The thing is, most of the time ChatGPT gives a wrong answer, so you need to look at the code, understand it, point out the mistakes and have ChatGPT output the corrected code. Sometimes manual correction is required, and sometimes rewording of the question is required. It won't do everything for you unless you are particularly lucky; you still need to be a competent coder to work with it.


Yeah, but with the breakneck speed at which this field is moving, how long will it be before we have dedicated 'Coding Competition' models (if they don't exist already somewhere)?

I can't help but be reminded of AlphaGo. When Lee Sedol saw the footage of the AI beating the then European champion, his reaction was 'That's neat, but it would still take a decade of training to get to world-champion level'. That was about half a year before he crushingly lost to it. We just underestimate how fast those models can evolve. I know that playing a well-defined game against itself is an ideal setting for RL models, and that language models use very different architectures. Nonetheless, I still think it's reasonable to expect that coding competitions may soon go the way of chess, where humans stand no chance against the best machines.

Not that that's a reason to cancel them immediately like Google just did (after all, chess is arguably thriving), but that's a different story.



I miss Topcoder. Funny story I went back through all my old SRMs and found one where I was placed in the same room with none other than mzuckerberg. He even unsuccessfully challenged my code and I in turn challenged his with a test he hadn’t considered, and he lost all his points!

This was, of course, a couple of years before Facebook was even thought of.


Competitive coding has always been a bit of a joke, but it's a great entry vector for kids to get introduced to programming (and eventually software engineering). I'm not entirely sure how to feel about this, though. I have always been suspicious that the existence of the game of competitive coding has been used to kind of justify the "leetcodification" of the tech interview circuit, and (IMO) that can't go away soon enough.


Why is it a joke?

It is like any other sport or Olympiad: it involves solving problems quickly and accurately, often problems you haven't seen before. There is no direct correlation between competitive programming like CodeJam/ICPC and leetcode; they don't target the same audience at all. I couldn't get to the top of these competitions, but I do admire the people who do; they are some of the most brilliant people I have worked with. I have seen how they apply their rapid understanding, problem-decomposition and debugging skills in real-world programming as well.

Coming to the argument about algo/DS interviews: could you suggest an algorithmic-interview replacement that is not time-consuming, is objective, is language-agnostic and scales well with the size of a company? It's not as if these companies don't also run past-experience and system-design interviews; they do, but those cover only a subset of what you would want to check.

I would have expected to read a more reasoned opinion on HN.


Depends what you’re hiring for.

Firstly the interview should be structured with an aim to maximize correlation between success on the interview and success on the job. If you’re not thinking along these lines you aren’t even playing the game.

For most startups, the right criteria to screen for is pace and quality of code, plus work ethic.

You can ask a relatively easy coding problem, whether algorithmic/data-structure or more real-world, that 99% of people will be able to solve, but still apply a high bar for what’s considered passing. Can they code the obvious solution quickly and effortlessly? I don’t care at all whether you’re able to produce a solution; I care about the path to that solution.

Screening with straight arbitrary algorithmic/design-diagram problems attracts hordes of people who have trained to the test. At scale, hiring these people may work by selecting for people with a decent work ethic for grinding problems, but you’re not going to get consistently great people this way. Lots of false negatives and positives.

Having hired over 100 engineers in the bay area, judging by coding competency first was always more effective than judging based on quick shot leetcode/algorithmic aptitude. You can easily grind canned algorithms to pass a leetcode style interview, but you can’t fake coding proficiency.

If you’re hiring for DB or ML developers, obviously theory is more important. But 90+% of you are not.


I agree, and I’ve competed at the highest level in programming competitions. Most of my peers there would not be good hires.

Top level programming competitions are like top level spelling bees. You need to have as many algorithms memorized as possible in code. That’s right. You need to be able to recite code by memory to pump out algorithm functions as quickly as possible since libs aren’t allowed generally. This isn’t a useful skill.


So you've competed at the highest level of programming competitions, but you also seem to be doing work on Upwork [1]? That seems pretty unusual...

[1] https://news.ycombinator.com/item?id=34935440#34935783


I do not know the OP and I cannot prove or disprove any claims they make about their life, but I can indirectly attest to the following: I was competing around 2008-2010 at the regional level of the ICPC in Central Europe and indeed, our team's approach at the time had some memorization aspects as well. (Our university had a significant amount of support for the competitions, with some coaching as well as a course that consisted of weekly practice contests.)

We never won anything, so I would not dare claim we competed at the highest level. As far as I remember, most of our preparation was about "recognition" -- how to tell if a greedy approach is optimal, or how to recognize when a dynamic-programming approach fits. And of course, how to write a program quickly without forgetting any corner cases.

I remember having daydreams back then of memorizing a max-flow algorithm or potentially even a linear programming solver and then quickly retyping it at a competition. Flows and LPs indeed solve a lot of stuff (LPs are P-complete). I admit I never did that, and it wouldn't be a winning strategy there anyway.

PS: Oh, and contrary to the poster above, most of my friends from the university days would be and indeed were great hires, judging by their jobs at Google, Microsoft and elsewhere. Some others, such as the actual ICPC winners from our university, ended up pursuing academic careers -- but I dare not say they would have a bad time in the industry.


Central Europe and Russia take competitive programming far more seriously than the U.S., from what my professors told me. It took very little effort to take first in a university wide competition and join the team that represented the school and went on to win the region. I suspect most Central European teams would sweep the USA.


…yes? Lol. I’ve moonlighted on upwork for easy cash in the past. Not all of us made 200k/year in our first job out of college a decade and a half ago.

Yes, I was part of a USA team in an international competition. I won our region.

For what it’s worth, competitive programming isn’t like sports. The competitive field isn’t that strong, and the amount of work it takes to make it past regionals and to an international competition is waaay less than in any sport, or even in something like chess, I suspect.


The field is definitely weak. I got a top-50 result in round 3 (but not top 25) of my first GCJ, which was my second programming contest ever (the first being the one time I showed up for one in college), and that score was ahead of some ICPC and IOI gold medalists. My only preparation was doing a few practice problems from prior years and making a blank C++ file with some includes, an empty main function, and "using namespace std;".

I think it helped to have some background in time-constrained thinking from high school math contests, but I was never particularly good at those, either.


Let's admit it (your username does), you're very good. Code Jam seems to reward problem solving and maths/CS ability far more than programming ability, relative to other coding competitions. The most important thing is analysing the problem, not memorising standard algorithms. That's why you could do so well with so little competition experience. And why I have stuffed up so many GCJ rounds.

That said, sometimes the field really is weak. I finished in the top 1000 on the first Distributed Code Jam and got a t-shirt just because... less than 1000 qualifying contestants were willing to even figure out the novel setup and compete. (Actually I did badly.)


Most people will not be able to solve longest common substring without being familiar with the problem ahead of time, and many programming problems can be broken down or transformed into similar algorithms.

Identifying root applicable algorithms and reproducing them quickly is what you study when competing in programming competitions.
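For illustration of the familiarity point above, here is a minimal sketch of the classic dynamic-programming solution to longest common substring, O(n*m) time (the function name and structure are my own, not from any particular contest):

```python
def longest_common_substring(a: str, b: str) -> str:
    """Return the longest contiguous substring common to a and b."""
    # prev[j] = length of the common suffix of a[:i-1] and b[:j]
    best_len, best_end = 0, 0  # best_end is the end index of the match in a
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the common suffix by one.
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

# longest_common_substring("xabcdy", "zabcdw") -> "abcd"
```

The "aha" is exactly the kind of thing the comment describes: recognizing that the problem reduces to tracking common suffixes, which is hard to reinvent under time pressure but easy to reproduce once seen.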


I recently read a blog post by Erik Bernhardsson on the interviewing process (he has several other interesting posts that approach hiring as a machine-learning problem): https://erikbern.com/2018/05/02/interviewing-is-a-noisy-pred... He suggests asking many short questions instead of one long leetcode problem (since the solution often relies on a single insight, it gives you very little information relative to the work involved). He also suggests using code reading:

>I print out 10-15 code snippets (not just code, but also UNIX commands, regular expressions, SQL queries and many other things). I then go through and ask: what does this snippet do, how does it work, are there any bugs, etc.? Reading code (as opposed to writing) means I can cover a lot of ground extremely quickly, spending no more than a minute or two on each problem.
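As a hypothetical example of the kind of snippet that works well in this format (mine, not from the article), a "what does this do, and is there a bug?" question:

```python
def remove_evens(nums):
    """Intended to remove all even numbers from the list in place."""
    for n in nums:
        if n % 2 == 0:
            nums.remove(n)  # bug: mutating the list while iterating skips elements
    return nums

# remove_evens([2, 2, 3]) returns [2, 3]: removing the first 2 shifts the
# second 2 into the index the loop has already passed, so it survives.
```

A minute or two on a snippet like this tells you whether the candidate understands iteration, mutation, and how to spot the class of bug, without any algorithm trivia.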


Yeah, asking questions with multiple stages/progression, where the early stages are obvious but still provide signal through coding competency are good.

Similarly, many small questions is going to give you a more accurate view than a single hard algorithm problem. There tends to be a trick or specific process to solve leetcode style problems, thus sessions are more black/white on whether the person makes that observation. Very unnatural to usual dev work.

Interviews should never feel “hard” even if your threshold for success is high. There’s nothing that requires you to ask hard questions to get good candidates.

By definition if your problems are perceived as “hard” then they aren’t assessing people on relevant skills, because a problem that is correlated with real world work shouldn’t be hard for anybody who is coding all day/every day.

(Though somebody who is unskilled may find pretty much any problem hard, speaking more broadly here)


I've had an interview during which I was asked to review code.

I loved it.


For startups I agree, they should definitely customize their interview rather than copying big tech pattern.

The kind of interview you describe is something Stripe also does, but I have seen even this result in non-objective selection criteria. For instance, one Stripe employee told me that they prefer it if someone uses a map rather than iterating over a list, as it is cleaner. While I agree with the sentiment, I don't think it is a good filter to add in an interview.


Can you elaborate on the distinction between coding competency and leetcoding? I agree with you that leetcode is terrible but I don’t quite follow how your suggestion is very different in practice.


Coding competency:

- Can they code quickly?

- Do they know common APIs offhand? (Not have to look up)

- Do they write code in a readable fashion?

- Do they consider tests?

- Do they think more deeply about code structure? Immutability, single source of truth, DRY, not overengineering

You can understand all of this by asking a very easy problem that is written in 50-100 LOC. I tend to ask them to implement a certain data structure, where the naive solution is obvious, but still requires ~50 LOC. Then there’s a better runtime solution so I can assess theory as well, but I dont judge primarily on that, or care too much if they figure it out.

You may be surprised how much signal you can get out of asking somebody to implement a class with a few APIs and basic loops/data structures.
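The parent doesn't say which data structure they ask for, but as a hypothetical example of this style of question, a min-stack fits the shape: the naive solution (scan for the minimum on every call) is obvious, while the O(1)-per-operation design gives you something to discuss:

```python
class MinStack:
    """Stack supporting push, pop, and get_min.

    Naive approach: scan the whole stack on each get_min (O(n)).
    Better: keep a parallel stack of running minimums (O(1) per op).
    """

    def __init__(self):
        self._items = []
        self._mins = []  # _mins[-1] is always the minimum of _items

    def push(self, x):
        self._items.append(x)
        self._mins.append(x if not self._mins else min(x, self._mins[-1]))

    def pop(self):
        self._mins.pop()
        return self._items.pop()

    def get_min(self):
        return self._mins[-1]
```

Watching someone write ~25 lines like this shows API fluency, naming, edge-case handling (empty stack), and whether they notice the running-minimum trick, all without a hard algorithm.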

The simple fact that people who don’t train specifically for leetcode don’t do well at it is effectively proof that it’s weakly correlated with real-world work results. Otherwise, people who are effective in the real world would automatically do well at leetcode. This applies mostly to the harder problems, though; the easier problems are much better correlated.


LeetCode problems are much simpler than general coding-contest problems, and they are only used as a basic competency test.

It's like asking for the difference between an Olympic race and a blue-collar job requiring you to be fit enough to run a mile.


> There is no direct correlation between competitive programming like CodeJam/ICPC and leetcode

What is the difference that you see between these sorts of algorithm puzzle contests?

Is there some reason that people who are good at one contest would be bad at another, similar contest?


The main difference is that the two have different audiences.

Leetcode works like this: Company A is asking these 100 questions with this frequency; let me solve these questions so that when I encounter them in the interview, I can solve them then. The whole premise of Leetcode is providing a question bank for what is already being asked. The aim is to clear the base bar a company has set.


It makes sense why someone who is good at LC via pattern recognition might not be as good at competitive programming. However, it’s not clear why the reverse would be true. Surely someone good at solving unencountered algorithmic problems would be good at LC?


You're right, it's true - if you're good at competitive coding you'll be good at LC. But that's incidental. You don't even need to be a Div 1 competitor on (say) Codeforces to be great at LC - the bar is incredibly low. Just being average is more than good enough at interviews. Serious competitive programmers aim to solve problems much harder than Leetcode. Just look at some older ICPC and Codejam problems.

There are some people who basically just do competitive programming to be good for interviews, but they don't get very far - not even to div 1. They reach whatever bar is necessary to clear an interview and quit immediately. The people who seriously compete to go for ICPC World Finals or to get far in CodeJam, they aren't really concerned with interviews.


That's right. I'm an okay-ish competitive programmer (around 2400 on Codeforces). I haven't done competitive programming for years (though I'm semi-active in the community, for problem setting etc.), and the last time I tried, I could still draw a random LeetCode hard and consistently solve it in 10-15 minutes max.

The problem is LeetCode is SO BORING for anyone ever tried competitive programming so people usually won't do it at all.


It’s super weird seeing people who maybe aren’t into puzzles use pattern matching strategies they developed by practicing leetcode.


> problems you haven't seen before

Having seen similar problems really helps though.


Yes, it does. Just like it helps in a Maths or Physics Olympiad or any other ranked entrance examination. Patterns are there in most real-world problems as well; the more you have tackled before, the easier it is to see a pattern.

Just like any sport as well


> Coming to this argument about even algo/DS interviews, could you suggest an algorithmic interview replacement, that is not time consuming, is objective, language agnostic and scales well with the size of a company?

Why are you presuming an algorithmic interview? How many engineering jobs depend on being able to memorise algorithms?

If you mean suggesting an engineering interview: sure. Give someone a task and 40 minutes to build it. Leave them alone. Come back and go through the solution together.


I am not a big fan of them because the sport has converged on the idea that the competition should be about quickly producing code for problems that map to algorithms fairly trivially once you parse the structure of the problem. Top-level competitive programmers have libraries of functions and algorithms that they copy-paste from, and so the end result is more like a spelling bee than anything else.

Personally, I would be a much bigger fan of competitive programming if at least one of these were true:

1. The problems at high level programming competitions were harder and maybe mapped more to real-world programming problems. Think of the IMO or the Putnam exam in "competitive math." The "Olympiad" of programming today consists of 5-10 problems that top-level competitors can solve in ~10 minutes each by reading the problem and mapping it to a known algorithm. Speed of typing and information retrieval from your algorithm library is the primary skill here.

2. The scoring method was something other than speed of production of code. Performance of the code is an easy alternative, but also not a great one unless you want to force everyone to use C++ or Rust. Using an objective function of some sort (based on the problem) would be an interesting method, but harder to write questions for.

It is very easy to see the link between leetcode and competitive programming. The problem structure is basically the same, and the big difference is that the leetcode problems that people give in interviews are easier. The problem-solving skills involved are the same.

Also, I can't suggest an interviewing method for you that has all those properties, because one doesn't exist. You aren't going to find one interviewing method that works the same for a cloud provider, a CRUD SaaS startup, and a game studio. This isn't something you can just cargo cult, you have to design it for your own situation. My inclination would be to cut the 5 leetcode interviews from your circuit and put that time into a single 2-3 hour take-home project that has some relevance to your company plus a 1-hour code/design review interview. Alternatively, give a candidate a bunch of small code snippets and ask them to debug the code.

Finally, if you want an IQ test (which is what I suspect hiring based on "spelling bee" competitions like competitive programming and leetcode is really about), you can add a Wonderlic test (or one of their competitors) to your interview circuit - a lot of hedge funds do it, and I honestly didn't hate it when I interviewed with a few of them.


I think you have some delusions about competitive programming. Now let's debunk them.

If you look at the Putnam results this year, a good number of the top 50 were competitive programmers (let alone the top 500, which was filled with names from Codeforces). And that's just America and Canada, which aren't the strongest countries in the world when it comes to competitive programming. A significant number of IMO medalists have also won gold medals at IOI.

Now, 1) is arguably already the case: in the last ICPC, no team could solve problem D (including the winning team, which had an IMO gold medalist, despite 19 tries).

Here is another example of a tough problem. It was so difficult that the solution to this problem was a published math paper! https://codeforces.com/blog/entry/109707#comment-977862

Now to debunk copying and pasting library code. Take a look at this blog. https://codeforces.com/blog/entry/112021

If you think competitive programming is just using some library code, taking 10 minutes per problem, and mapping it to some well-known algorithm, you most likely have never done it to any serious extent. In that case, offer a more informed opinion rather than saying it is the same as leetcode.


> Top-level competitive programmers have libraries of functions and algorithms that they copy-paste from, and so the end result is more like a spelling bee than anything else.

I don't think you've kept up with or watched any "top-level" competitive programmer solve problems. Saying all competitive programmers do is copy-paste is like saying all programmers do is google.


I think that if your job is to program quickly (which is the case for some subset of professional programmers), most of your job is to figure out what to Google, and then putting together your search results. That is roughly the same for competitive programming, at least for the times I have tried it and the people I have watched do it. About half the time is spent thinking about what to pull off the shelf, and the remaining half of the time is putting it inside your solution template (which you have pre-made to save time during the competition).


And just like any other sport it has virtually no usefulness in real life


Yeah who needs well-being, a healthy back or cardiovascular health anyway?


Ironic since competitive athletes are usually the ones with issues later in life from pushing their body too far, not to mention the mental stress.


You don’t need to compete for those benefits.


Some find motivation in competition.


I think one of you is defining ‘sport’ as in competitive sport and the other is defining it as exercise.


Just like sport coding competitions are fun :-)


> It is like any other sport

Do you consider competitive programming a sport?


Well, if we now consider videogames a form of sport called esports, I don't see why we can't do the same for competitive programming. But I think OP was referring more to the competitive side, regardless of whether it qualifies as a sport or not.


If playing Fortnite and sc2 are considered sports, why not?


> I have always been suspicious that the existence of the game of competitive coding has been used to kind of justify the "leetcodification" of the tech interview circuit, and (IMO) that can't go away soon enough.

Everyone says this and then their replacements are time-wasting take-homes or raw credentialism. It's a drag to get ready for interviews but I think the fact that one at least has the opportunity to play on a level playing field regardless of what names are already on their resume is one of the best things about this career.


Or on-site micro-engineering tests, with your own laptop and peace and quiet until the time is over.


Strongly disagree! Coding competitions are fun and educational. That's always been the only reason I've done them -- well, and also to get together with friends, and to have something to feel good about. Using them in recruiting is something entirely external to the sport (and yes, I know Code Jam was categorised as "recruiting" by Google). I'm incredibly disappointed that Code Jam is going away. I wanted more t-shirts.


This is quite the ignorant perspective. Competitive programming is about much more than tech interviews. It allows ambitious people who care about algorithms and problem solving to learn, have fun, and improve their programming ability. I think it's healthy to facilitate such activities.


> a bit of a joke

You demeaning a mind sport just because something similar is used for interviews is the joke.


Competitive coding is completely unrelated to leetcode. You can't get good at competitive coding by doing leetcode, and you can't get good at leetcode by doing competitive coding.


It's funny how the age of an event is used to justify that it should be stopped. So sunset Google Search too? It seems to be a Google thing: randomly starting and discontinuing things without much thought.


This is how much of the world works, sadly. This is why brands refresh their logos every 5-10 years, to avoid becoming the old, the stale, the has-been. It seems to me that for some reason, this effect particularly affects software developers, because we are the first to call finished, stable open-source libraries "abandoned" if someone doesn't make a pointless git commit every 3 months.


In general, online rounds of coding competitions are no longer going to work, as GPT-4 has been shown to outperform humans on coding competition tasks. You could allow AI assistance, but that quickly becomes a "who can afford more compute" competition.

RIP coding competitions.

Incidentally, World Finals: up to 2020, it was held in a location in the US or Europe, and has been online since. Pretty incredibly, Gennady Korotkevich has won the title every year since 2014 – except once.


>As GPT-4 had been shown to outperform humans on the coding competition tasks

On the same frontpage: https://news.ycombinator.com/item?id=35297067

Food for thought. Also, by their own paper it only has a Codeforces rating of 392, which is below the 5th percentile.


Also a cute prompt I saw on the same twitter thread

```

This is a variant of the monty hall problem:

Suppose you're on a game show, and you're given the choice of three doors. IMPORTANT: All the doors are transparent. Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who clearly knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

```

The response from chatGPT (v3, non plus) is:

```

Yes, it is to your advantage to switch your choice.

In this variant of the Monty Hall problem, the fact that the doors are transparent does not change the underlying probability of the problem.

```

I wonder how gpt-4 does. There does seem to be a bit of overfitting, but I consider the prompt fair enough and even called out the trick in capital letters, so that even a tired human should be able to get the answer.

Edit: An even more obvious variant of the prompt is seen in https://news.ycombinator.com/item?id=35192466, which goes further and spells out that the contestant explicitly picks the door with the car. ChatGPT still gets it wrong.


GPT 4 gave me this answer:

In this variant of the Monty Hall problem, since all the doors are transparent, you can clearly see what is behind each door. This changes the nature of the problem entirely. The element of uncertainty, which is present in the original Monty Hall problem, is no longer a factor.

In this scenario, if you pick door No. 1 and see a car behind it, you should stick with your choice, as there is no advantage to switching. If you pick door No. 1 and see a goat behind it, you should switch to door No. 2, as you can clearly see the car behind it.

Since you can see what's behind the doors, the probability of winning the car is no longer based on conditional probability, and the original Monty Hall paradox does not apply. Instead, your decision is simply based on your observation of what's behind each door.


Somehow, this would be one of the most impressive things I've read about GPT-4. It's really difficult to argue that it doesn't have a well-founded understanding of the question, assuming, of course, that this wasn't actually in its training set.

And I see someone DID ask GPT-3.5-based ChatGPT the same question at least a month ago [1], so OpenAI certainly has it on record. That's long enough ago that it could well have been used to fine-tune GPT-4.

[1] https://www.reddit.com/r/bing/comments/117hj14/bing_chat_can...


GPT-4 responses.

-- First variant --

Since all the doors are transparent in this variant of the Monty Hall problem, you can already see what is behind each door when making your initial choice. Therefore, there is no need to rely on probability or switching strategies to maximize your chances of winning the car.

In this case, you simply choose the door with the car visible behind it. The host's action of opening another door and offering you the chance to switch doesn't provide any new information or change the odds, as you can already see the contents of each door.

-- Explicit variant --

In this particular case, since the doors are transparent and you can see that the car is behind door number one, you should stick with your original choice. There is no need to switch doors as the probability of winning the car by staying with door number one is 100%. The Monty Hall problem typically assumes that the doors are not transparent and that the contents behind them are unknown to the contestant.


> I wonder how gpt-4 does.

The problem is that as soon as people started tricking ChatGPT 3 into problems like that, the correct answers are now being used to train the next versions and are going to be part of the dataset.

So GPT-4 or GPT-5 may get the answer right, but that still wouldn't mean anything.


Not the case for GPT-4, though, as its knowledge cutoff is the same as GPT-3's; that's why it's easy to compare the two on the same problems and see the difference.


Not so fast!

Yes, the GPT-4 paper says

> GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its pre-training data cuts off in September 2021 [footnote: The pre-training and post-training data contain a small amount of more recent data], and does not learn from its experience.

But note the more recent data. We know that InstructGPT (GPT-3.5) was RL trained on examples of previous queries to GPT3, such as those trick questions. We could assume everything (after filtering e.g. for benchmark contamination) ever sent to OpenAI is in that post-training set. This is indeed a very small amount of data compared to the trillion-plus tokens of older data it was surely trained on. We also know that when ARC did their evaluations of GPT-4, OpenAI hadn't finished fine-tuning yet, so they've certainly been continuing to do so recently.

See also my other comment https://news.ycombinator.com/item?id=35300668


I suspect you're right. Part of the supervised "learning" is hard-coding answers to gotchas posted on Twitter.


Fairly certain that is not what is going on here. GPT-4 seems genuinely better at reasoning and harder to trick from my testing.


I've read your prompt several times now and still don't understand. It seems intentionally crafted to be confusing with messy punctuation. I get lost before finishing the paragraph every time.

Just a couple of years ago, anything other than it responding "I don't understand" would have been science fiction, and now we are surprised it answers incorrectly on something even humans have a hard time parsing.


The point of the prompt is that in the classic game you don’t know what’s behind the doors.

But in the variant of the game used in the prompt, the doors are transparent.

So in this variant of the game you already know what is behind all of the doors, meaning that you will already have been able to choose the right door, and also meaning that “revealing” what is behind one of the doors does not change the probability of what is behind the other doors.
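The difference between the two games is easy to check empirically. Here is a quick simulation sketch (mine, not from any comment in the thread) of the classic opaque-door game, where switching should win about 2/3 of the time:

```python
import random

def classic_monty(trials: int = 100_000, switch: bool = True) -> float:
    """Win rate for the classic (opaque-door) Monty Hall game."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's blind pick
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d not in (pick, car))
        if switch:
            pick = next(d for d in range(3) if d not in (pick, opened))
        wins += (pick == car)
    return wins / trials
```

Switching wins roughly 66% of runs and staying roughly 33%. With transparent doors there is nothing to simulate: the contestant sees the car from the start and wins with certainty, which is exactly why the model's "yes, it is to your advantage to switch" answer misses the point of the variant.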


Chess already has that problem and is still thriving. Something else is going on here.


I'm sure it outperforms the general population, since most people can't code a hello world, or regurgitate an answer to a problem they've been trained on but don't understand, the way ChatGPT can. But if a minimally competent human gave me a completely nonsensical answer to a question they hadn't seen before, as confidently as ChatGPT does, I would think one of us had had a stroke.

I would expect a journalist who never had to think through a theory of computation course to make breathless claims that ChatGPT can "solve programming problems", but I'm pretty surprised to hear so many people with jobs in tech repeating these claims, especially since all it would take to trip up ChatGPT is a few seconds spent typing in a slightly unfamiliar or non-trivial question. It's like a kind of second-order Turing test: if you think ChatGPT can program, you're not a real programmer.


In related news, all chess and Go competitions have been shutdown permanently.


GPT-4 is completely incapable of solving any advanced problem in coding or mathematics competitions, and usually doesn't even appear to correctly understand the problem statement (assuming the solutions are not in the training set, of course).

Just try submitting the IOI 2022 and IMO 2022 problems.

Obviously it still outperforms the average human, since the average human has no knowledge of mathematics or computer science whatsoever.


> GPT-4 had been shown to outperform humans on the coding competition tasks

citation?


Probably like most GPT-4 performance claims, the non-peer-reviewed scientifically-formatted paper with opaque methodology published as PR for GPT-4.


[flagged]


Sometimes things that have been discussed as possibilities on this forum begin to be repeated and taken as fact. One might say that this forum has hallucinated the capabilities of this bot beyond its actual capacity, and the hallucination grows daily. A citation is indeed needed.


I haven't seen a clear answer on this, actually - there was some confusion about whether the training data contained solutions to older problems, and whether it underperforms on previously unseen ones. Plus, it still seemed to underperform on medium and hard problems? Does anyone in the know have a summary on this?


I think it has only been shown that ChatGPT can solve easy/some medium leetcode style questions. I doubt it can solve difficult problems that would give human competitive programmers a hard time (for now anyway...)


Oh, I looked and didn't find anything. I thought about writing something about how I thought it wasn't true, but decided it would be better to just ask if they had a reference for it. Also, it's an extraordinary claim, so pretending it's something everybody already knows is pretty weird.


GPT4 has been out for what, weeks? We aren't talking established theorems here.


Chess has had engines 100x better than Magnus Carlsen for years and it's not dead. People who have fun entering these competitions will continue having fun, while people who don't will keep crying that they're useless or dead or whatever.


That requires a clear ruleset.

In chess it's very clear where to draw the line: no help allowed, you're on your own with your brain.

Where should one draw the line in Coding competitions? No ChatGPT? I guess most people would agree. No Copilot? Same as ChatGPT as the products seem to converge. No Googling? That too will converge to something close to ChatGPT.

If you can't look up information during a contest anymore, it comes down to memorization instead of problem solving. I am afraid the concept of coding contests is dead indeed.


There have, of course, already been coding competitions that didn't allow any sort of reference, ones that only allowed reference you brought with you, etc.

And, honestly, plenty of people do have enough memorized to do this stuff.


In-person competitions simply don't allow any Internet access, you have to use their own machines and you can't bring any data or printed material or electronic device.


That's not the experience I had at past events. Everyone brought their own devices and solved the problems with whatever tools they deemed suitable.

Actually, I'm attending another such event in a few days and expect to see a lot of ChatGPT sessions used with varying levels of success.


> GPT-4 had been shown to outperform humans on the coding competition tasks

This is not true as of March 2023 (hard to prove a negative of course but look at the percentiles in the GPT4 report)


>>After nearly 20 years of Code Jam, it's time to say goodbye. While Code Jam will not continue as planned, we invite you to participate in our Competitions' Farewell Round on Saturday, April 15 at 14:00 UTC.


Appears the answer is that the staff responsible were laid off:

https://newsletter.pragmaticengineer.com/p/why-did-google-cl...


Bummer! It had a lot of reach and was a great tool to encourage future engineers. Cancelling these events makes it seem like the era of big, fancy, cool tech companies is over. Of course, you have the emergence of other tech companies like OpenAI.


Wow. I have to agree that knowing the reason for this tarnishes my image of Google by bringing home the reality of the layoffs.


The Google Code Jam and other programming competitions motivated me and an entire generation to focus on programming as a fun pursuit and a healthy side of competition. While I'm very sad that it's gone, I'm very thankful to the Code Jam team for these 20 years of work put into competitions and in inspiring me into coding for fun.

If anybody here has teenage students who are mildly interested in programming, I'd really recommend getting them into Codeforces or any of the few remaining competitions. They are more accessible than coding olympiads like the IOI, and it's the perfect way to develop a programming muscle while being in contact with a community of peers.


Well, they announced they're shuttering it. Oddly, there is zero information as to why.


Supposedly the recent reduction in employees mostly targeted non-profit areas, and several of the managers in charge of this were laid off. I guess the benefit nowadays, after 20 years, didn't justify the cost. I doubt you can put a positive spin on it, so no reason to go into details?


It was probably also primarily valuable as a recruiting tool, and if they're not recruiting as heavily it no longer makes as much sense to invest in it.


The age of men is over. The time of the bot has come.


(10000 years later) The age of the bot is over. The time of the elves has come.


We must keep going forward in time until a backwards time machine is invented.


It's being replaced by a ChatGPT-controlled competition.


More like withoutgoogle.com


Are good hackathons nowadays a good way to train for professional development? I used to compete in computing olympiads in the early 90s, but now I find that work coding problems are more about integrating existing stuff than solving difficult problems. This is a generalization, talking about what is typical.


I guess Google really isn't the company it used to be.


Oh yes, while everything and everyone else is exactly the same as they were 20 years back.


Most technical people at megacorps these days are incentivized to show how quantifiably influential their work was, even if it's 110% BS.

Meanwhile, apps get shittier, open source contributions are bloated or useless, and generally there's no incentive to refactor or fix anything unless it's broken today - and even then, only to patch it without considering reducing the mountain of tech debt.


While I participated in the Google Code Jam, my favourite was Hash Code - the team programming competition. The problems were more open ended and being able to work with a team was really great fun. Also, much more resembling a real-world practical scenario.

Sad to hear that Google is closing them down, really feels like an end of an era.


Is anyone going to export the problems and solutions from the archived Code Jam rounds onto GitHub?


Many students in my college got their first taste of programming from competitions like CodeJam, Facebook's Hacker Cup et al. Sad to see them going away.

Though it did inspire a new generation of programmers and for that long live Code Jam!


Looks like Google Hash Code is gone as well. Sad.


Google is investing in Starlink and SpaceX, so maybe they are pivoting to a new kind of jam.


I wonder if they’ll keep the site up or the old problems around. Some of them were great.


Do they have an archive of their challenges?


Because there are already a lot of competitive coders who won't even get jobs.


I can't help but I have the feeling Google is burning money at an unprecedented rate.


why not just go look at the public financials?


There is no need for coding competitions anymore. The competition is Leetcode. The prize: a high paying job.


1. Build an AI that beats all humans in leetcode

2. Take all high paying jobs

3. Profit


Indeed.

It is a product of decades of quantitative easing, near zero interest rates and a tech bubble that needed to collapse.

Thanks to GPT-4 and other LLMs, the need for coding competitions and interview tests like HackerRank, Leetcode, and Codility has been driven to zero; they have almost no use, since everyone will just Ch(e)atGPT the coding interview or competition.


Sounds really weird, because you still have to screen people for software engineering jobs, and administering problems in an in-person interview is how it was always done before coding competition platforms.

As to the platforms themselves, they are still usable for in-person interviews and practice.


>almost no use since everyone will just Ch(e)atGPT

pretty easily solved with covid-era online exam surveillance, i.e. camera on, screen shared and live explanation of your solution. But honestly not even that is necessary because cheating on online tests isn't new and companies are aware of it. ChatGPT didn't invent someone else taking a test for you.


CheatGPT. Love the pun :)



