
Jason

Everything posted by Jason

  1. I think that was the whole point, that the Arts staff at the New York Times were having a mixed reaction more similar to audiences than to the way Hollywood is reacting. (Edited, because people are freaking out over that word even though it's not the main point.)
  2. This is a bit difficult to parse for me. But essentially, in IRV, since your first, second, third choices etc. are only counted if all of the preceding (i.e. higher) choices have already been eliminated, you can never help or harm your favoured choice based on how you rank your lower choices. Essentially what @The Panda described above, worded a little differently. Also, any ballot that has at least all but one of the choices ranked (e.g. 4/5 in a five-nominee category) is guaranteed to be "counted" in IRV, since the final round of counting must have at least two choices remaining. So another advantage of IRV is not having to worry about whether you've voted for a nominee with a chance of winning. If you're really only interested in harming a movie you absolutely hate, then it is occasionally possible (under very specific circumstances) for the order of your higher choices to make a difference in getting it eliminated, but not in a way that is predictable unless you have detailed and fairly precise information about the full ballot preferences (i.e. all of their preferences, in order) of the other voters. Since we don't have that information, any form of strategic voting is far more likely to harm a nominee you actually prefer than it is to hurt the one you hate. TL;DR: It's a very, very bad idea to submit anything other than your honest preferences.
  3. So it's the same system AMPAS uses for picking the Best Picture winner - instant-runoff voting (IRV). Both of the systems he described are forms of preferential balloting.
  4. Wow, I just learned something new today. Australia wrote in a claim to New Zealand in their Constitution. (Or close enough to it, anyway.) And I thought that Canada had it rough, living in the shadow of the United States.
  5. I actually think @Telemachos really shouldn't stop. Because he's nice enough to not actually gloat when he's right (Moana ), and he's a great sport about being wrong. (about anything, not just animated - *cough* Blackhat club *cough*) If he stopped being cynical, it would deprive the rest of us of the fun every time an animated/family film does blow past his very low expectations.
  6. @AABATTERY Oh god, I just realized that the eOne distributing John Wick 2 in Australia/NZ is the Australian arm of a Canadian company, based in Toronto. I am both embarrassed, and so very sorry.
  7. You will soon wish your newest club could also slumber forever. Instead, it will be a glorious failure, March 17. That's right, we won't even have to wait for the weekend actuals. Or for TLJ to open this December.
  8. You should add me to your hit list as well. I didn't quite hate The Lego Movie but I didn't like it that much either, definitely less than most other people. It's possible that a lot of the humour went over my head; I'm absolutely terrible with pop-culture references. I do wish I'd seen it in theatres - everyone else laughing when I'm not is usually how I know I'm missing something.
  9. I bet Beauty and the Beast beats Frozen DOM (400.7M), 100 points, up to 4 people. (cc:@baumer?)
  10. I prefer the average rating as an indicator of average perceived quality by critics as well. The Tomatometer is just a yes/no vote, which means that a film that is perceived as significantly better on average could have a lower Tomatometer score because a few more critics happened to not like it - which could essentially be the result of random chance. This is particularly a problem for films of generally high perceived quality. Yes, Metacritic has the advantage of all of its reviews providing a numerical score and presenting its data in the form of an average, but the sample sizes are indeed too small. Differences of ~6 points or less are not meaningful (i.e. safe to draw a broader conclusion from) for wide releases, with even larger gaps required for limited releases. Differences in the Rotten Tomatoes average rating are meaningful for wide releases when the difference exceeds 0.2-0.3 points.
  11. Yes, and more importantly (in my opinion) a 98% RT score is on the high side for an average rating of 7.7 (not quite the same thing). Here's a scatterplot of Tomatometer score against average rating for all animated films submitted for Oscar consideration from 2004-2015 (the original reason why I had the dataset), divided into groups based on year of release: There seems to be a trend in recent years for films with a given average rating to have a higher RT score than in years past. Even taking that into account, the expected range for a film with a final average rating of 7.7 would be from around 92% to 97% (from eyeballing the graph). However, I don't have a proper dataset comparing the average rating at this point (quarter of reviews in) to the final RT score, mostly because Rotten Tomatoes makes gathering such a dataset a pain in the ass. In theory, it's possible that this sort of disconnect at this point in reviews could suggest that the average rating is likely to rise, although from the ten or so films that I have looked at I don't think that's likely to be the case.
  12. The chance of it staying above 95% (i.e. 96% or greater) is about 78%, assuming random sampling and a total number of reviews around 225. The fact that nearly a quarter of reviews have already dropped is a promising sign. The graph of the probability distribution looks like this: Note that the cumulative probability for any given RT% is the total of 2-3 bars on the graph.
  13. Only one of them is - TFA. The other two are Minions and BvS. BvS is the one I'm a little nervous about revealing (should be safe in this thread, right?), since there are people convinced that critics decided to pile on after the first set of not-quite-as-bad reviews. But it's actually more likely that WB was being selective in early screenings: most films (at least in my sample) drop a quarter of their reviews by about 2-3 days in advance, but for BvS that threshold wasn't reached until the day of release.
  14. Rating, or Tomatometer score? No matter, I can do both. At only 20 reviews in, the 90% confidence interval for the RT score is enormous - 81% to 98%, assuming that the final number of reviews is around 200. I don't even want to see what the 95% C.I. looks like. There's about a two-thirds chance the final RT score ends up between 86% and 96%. If you're only interested in the lower limit, there's about an 85% chance the final score is at least 86%. The graph of the probability distribution for the final RT score looks like this: Exercise caution in interpreting the P-values; there are about two bars shown for every percentage point, so the probability of exactly a 95% final rating is about 9%, not 4.5%. Only 13 of the 20 reviews so far have provided a rating. Based on those, the final rating has a two-thirds chance of falling between 7.1 and 7.8, or an 85% chance of exceeding 7.1. All of this assumes random sampling, of course - the impression I get from examining about 25-ish films from the past few years (top ten of 2015 and 2016 plus a few others) is that any non-random bias is small for most films, but there's a minority of films that deviate well outside random sampling expectations. (Three out of those 25, plus one borderline case.)
  15. Welcome to the forums! I lurked for a bit too before I finally signed up - glad I did, it's fun to be a part of the conversation.
  16. Yes, the real problem is that the OS total hasn't been updated since January 23 either.
  17. If it makes you feel any better, I'm in Toronto and I've seen it available but just never bothered to look it up because they're always reserved seating, and if I'm going to be planning that far ahead and buying premium tickets, it's going to be IMAX. Anyway, it's "a movie theatre seat that moves so you can move with the movie!". Sounds like a potentially nauseating gimmick to me, although I wouldn't mind hearing from anyone that's actually experienced it. Comes with this associated warning, too (spoilered for length):
  18. Arrival felt like it was pretending to say something profound about language, co-operation, and game theory, but it really wasn't. It's not that I disagree with the broad messages or themes, I just think the treatment was superficial at best, and fundamentally wrong in some of the details, especially regarding the Sapir-Whorf hypothesis. It wasn't saying anything about time at all, although I don't even know if it was pretending to. Best to not get me started on the "time is a flat circle" stuff. I know I'm probably taking it too literally. Also, this is probably a very personal problem, but I really didn't like the portrayal of Ian. No scientist I've ever met (and I've met a lot) would say anything as ignorant or arrogant as some of the lines he's given. To be clear, I think the film was very well-made, with beautiful visuals and a terrific performance from Amy Adams. So I didn't hate it. I was just very disappointed, compared with what I expected based on reviews etc.
  19. Who has the best chance of beating Coolio in the BOFFIES? I need to know now - they have my vote.
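The elimination-and-transfer counting that posts 2 and 3 describe can be sketched in a few lines. This is a generic IRV tally, not the Academy's actual tabulation procedure, and the ballots are made up for illustration:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff voting: repeatedly eliminate the choice with the
    fewest first-place votes among survivors, transferring those ballots
    to their next surviving choice, until one choice has a majority."""
    remaining = {choice for ballot in ballots for choice in ballot}
    while True:
        # Each ballot counts toward its highest-ranked surviving choice -
        # which is why your lower rankings can never hurt your higher ones.
        tally = Counter({choice: 0 for choice in remaining})
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()) or len(remaining) == 1:
            return leader
        # Eliminate the choice with the fewest current votes.
        remaining.discard(min(tally, key=tally.get))

# Hypothetical five ballots over three nominees, ranked first to last:
ballots = [
    ["A", "B", "C"], ["A", "C", "B"],
    ["B", "C", "A"], ["B", "C", "A"],
    ["C", "B", "A"],
]
print(irv_winner(ballots))  # C is eliminated first and its ballot transfers to B
```

Note how the final round always has at least two choices remaining, so any ballot ranking all but one nominee is guaranteed to count in that round.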
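Post 10's point about the Tomatometer being a noisy yes/no aggregate is easy to quantify. A minimal sketch, assuming each critic approves independently with the same probability (a simplification - real reviews aren't independent coin flips):

```python
import math

def tomatometer_sd(p, n):
    """Standard deviation (in percentage points) of a Tomatometer score
    when each of n critics independently goes 'fresh' with probability p."""
    return 100 * math.sqrt(p * (1 - p) / n)

# A film genuinely liked by 90% of critics, with ~200 reviews:
print(round(tomatometer_sd(0.90, 200), 1))  # ≈ 2.1 points
```

Under these assumptions, two films of identical underlying quality can easily land a few Tomatometer points apart by chance alone, which is why the average rating is the better quality signal.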
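Posts 12 and 14 don't spell out the model behind their probabilities, but the general idea can be sketched with a simple binomial projection: treat each not-yet-counted review as fresh with probability equal to the fresh rate observed so far. The function name and the numbers below are illustrative, not a reconstruction of the original calculation:

```python
import math

def prob_final_at_least(threshold, fresh, seen, total):
    """P(final Tomatometer >= threshold), modelling each of the
    (total - seen) uncounted reviews as independently fresh with
    probability fresh/seen (the rate observed so far)."""
    p = fresh / seen
    remaining = total - seen
    need = math.ceil(threshold * total) - fresh  # fresh reviews still needed
    return sum(
        math.comb(remaining, k) * p**k * (1 - p) ** (remaining - k)
        for k in range(max(need, 0), remaining + 1)
    )

# Illustrative: 55 of the first 56 reviews fresh, ~225 reviews expected.
print(prob_final_at_least(0.96, 55, 56, 225))
```

Fixing the fresh rate at its observed value like this gives a noticeably higher probability than the post's 78% figure, which presumably also accounts for uncertainty in the underlying rate itself.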