



Myyrä last won the day on January 13 2018

Myyrä had the most liked content!

Community Reputation

2,549 Excellent Walrus


About Myyrä

  • Rank
    Rules Lawyer

Profile Information

  • Gender
    Not Telling
  • Location
    Espoo, Finland

  1. Guardian brings all the movement your crew could possibly want.
  2. There is also the questionnaire, which has fields for naming underperforming models and explaining why they are underperforming. Wyrd hasn't actually stated their preferred format for feedback in this open beta, just that we should use the forms to submit it.
  3. I don't see why playtesting shouldn't be science. Science (from the Latin scientia, meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe (or a miniature game). I'm all for backing one's claims with evidence. However, I reject the notion that battle reports are the only evidence worth considering.

     The real-life experiences of any single individual become more or less meaningless when stochasticity is involved. If I roll a normal six-sided die 10 times and get 2 ones, 4 twos, 1 three, 2 fours and 1 five, is the die good or bad? There's just no way to tell for sure. Ten also happens to be the number of actions most models get to make in a Malifaux game (assuming they don't get murdered horribly), so it is really difficult to say whether a model is good or bad based on the outcomes of those actions alone. However, if we know for sure beforehand that the die will produce every number from 1 to 6 with equal probability, we don't really have to roll the die to know whether it is good or bad, do we? That is pretty much comparable to the amount of information we have available from the rules of the models and the rules of the game. It is possible, and in fact fairly easy, to calculate how much damage a model is expected to do when it attacks another under specific circumstances, or the probability distribution for the number of attacks it takes to kill that other model.

     The point I'm trying to make is that it really grinds my gears when people automatically demand battle reports as evidence for any and all claims. Proving most claims with battle reports is unnecessary, time-consuming and incredibly difficult. It's usually easier to provide better evidence by other means. Keep demanding evidence, but stop demanding unnecessary battle reports.
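     The expected-damage calculation mentioned above can be sketched in a few lines. Everything here is a made-up example, not any actual model's card: a hit probability, a weak/moderate/severe damage spread, and a 6-wound target are all assumptions.

     ```python
     # Hypothetical numbers, not taken from any real Malifaux card:
     HIT = 0.6                            # probability a single attack hits
     DAMAGE = {2: 0.5, 3: 0.35, 5: 0.15}  # damage dealt on a hit -> probability

     # Expected damage of one attack: hit chance times mean damage on a hit.
     expected = HIT * sum(d * p for d, p in DAMAGE.items())
     print(f"expected damage per attack: {expected:.2f}")

     def total_damage_dist(n):
         """Distribution of total damage after n attacks, by convolving
         single-attack outcomes (0 damage on a miss)."""
         one = {0: 1 - HIT, **{d: HIT * p for d, p in DAMAGE.items()}}
         dist = {0: 1.0}
         for _ in range(n):
             new = {}
             for total, pt in dist.items():
                 for dmg, pd in one.items():
                     new[total + dmg] = new.get(total + dmg, 0.0) + pt * pd
             dist = new
         return dist

     # Probability that a hypothetical 6-wound target is dead within n attacks:
     for n in range(1, 6):
         p_dead = sum(p for dmg, p in total_damage_dist(n).items() if dmg >= 6)
         print(f"P(dead after {n} attacks) = {p_dead:.3f}")
     ```

     The same convolution works for any damage track, which is exactly why these questions don't need a battle report to answer.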
  4. Myyrä

    Pray for Abuela

    To me, Abuela doesn't seem bad for a 5ss model. Sure, she will spend most of her time focusing and using Listen up young one, but I don't see that as a big problem. The bigger problem seems to be that there aren't good targets for her to command, because most Family models seem really lackluster.
  5. I feel like there is a bit of a contradiction here. You say that every bit of data is important, but that the amount of data will inevitably be insufficient for data analysis. Wouldn't that lead to the logical conclusion that other ways of estimating the power levels of the models should also be considered?
  6. I wonder how much it actually matters what I learn about the models. I would still only have one data point, and everyone else would just see that it is possible to win with the flaming garbage dump. Would that actually help anyone else accurately gauge the power level of anything? Sure, I could also include my impressions of the game and try to argue that the flaming garbage dump is just that, but why should anyone take my impressions after one game any more seriously than my impressions based just on reading the rules? There are a million reasons why something might seem like a flaming garbage dump in a game when it actually isn't one. Some of the possible reasons include: ignoring some rules of the game, ignoring some rules of the model, not using the model in an optimal manner, not bringing any important synergistic models, a bad scenario for the model, bad playing, bad luck, inability to accurately estimate your opponent's skill level relative to yours... The list goes on. Having seen many playtesting reports over the years, they seem to suffer from at least some of these issues more often than not. That does not really invalidate the games as statistical data points (as long as there are multiple people testing all the stuff), but it definitely does raise questions about the validity of the conclusions drawn by the players themselves.
  7. I suspect most people who participate in the playtest will do it mostly for fun. That means they will focus mostly on models they find cool, interesting or powerful. That's what I want to do as well, because realistically speaking, I won't be playing and reporting dozens of games. I don't have unlimited time for playing Malifaux, and neither do my friends. I would really appreciate it if you dedicated your time to testing the underpowered models, since that presumably is the right thing for you to do, given that you feel at liberty to write comments like that. You do not seem to understand the amount of statistical data that would be needed from the playtests to draw mathematically sound conclusions about the power levels of all the models in the game. It is literally impossible to balance the game based on playtesting alone.
  8. I was playing Lucius at his worst (after Malifaux child errata, before wave 3) against Mei Feng, in a killy scenario, and my opponent's deck was on fire.
  9. So if someone wants to claim that a model is obviously shitty, they have to play multiple games with said shitty model. That sounds absolutely awful. I'm not getting paid enough for playtesting to actually do something I don't enjoy. It's not like I can even cut down the workload by taking all the shitty models in a single crew, because then no one will know which of the shitty models caused the game to go to shit. And what happens if I actually win a game with the hot pile of garbage, because I played against the guy who now has a roughly 0/1/100 W/D/L record against me? (There actually is a guy like that.) Is the pile of garbage suddenly viable?
  10. What you are asking is essentially impossible. No one can demonstrate conclusively with playtesting that a model is not good enough. How would you even go about doing that?
  11. It seriously took them that long to bring him in line? Oh, wow...
  12. Both theorycrafting and testing can be done with very different levels of expertise and effort. I have also seen numerous game reports that got some important rules wrong, didn't include key synergies for models and used models very inefficiently. Not all theoretical analysis needs to be based on mathematics, but mine often is, at least partially.
  13. It's very easy to estimate or simulate Nino's combat usefulness on different tables. If the terrain is open enough, Nino will be attacking something every turn until killed, and if it isn't, he will be doing nothing. Humans have a built-in ability to simulate events they haven't experienced. It's called imagination. You don't have to make and taste liver ice cream to figure out that it's probably not a good idea. Not all simulation needs to be done with computers, and the imagination of an experienced gamer probably gives more accurate estimates of a model's usefulness than a playtest by a complete beginner.
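     That kind of terrain-dependent estimate is just a multiplication, and it can be written down. All the numbers below are assumptions for illustration (turns with a visible target, attacks per turn, expected damage per attack), not Nino's actual stats.

     ```python
     # Back-of-envelope estimate of a sniper's damage output on different tables.
     # Every number here is a hypothetical assumption, not a real stat line.
     EXPECTED_DMG_PER_ATTACK = 1.7  # e.g. from an expected-damage calculation
     ATTACKS_PER_TURN = 2

     def expected_total_damage(turns_with_a_target):
         """Expected damage over a game, given how many turns a target is visible."""
         return turns_with_a_target * ATTACKS_PER_TURN * EXPECTED_DMG_PER_ATTACK

     open_table = expected_total_damage(5)   # open terrain: a target every turn
     dense_table = expected_total_damage(1)  # dense terrain: barely any sight lines
     print(open_table, dense_table)
     ```

     The point is that the estimate swings by a factor of five purely on terrain, which is exactly the kind of thing imagination (or a three-line script) can tell you without a single game.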
  14. It's not that difficult. Most of the synergies are just stat changes or positive flips or something, and it's very easy to simulate those. Simulating the movement of models is not awfully useful, but it could definitely be done. The reason it isn't is that you would just find out that a faster model is better (who would have guessed?). Assuming the models are divided roughly equally into 7 factions, that's about 500 000 000 000 crew combinations, give or take a power of 10. It would take about 60 million years of Malifaux testing to test each of them only once.
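     For scale, that crew-count arithmetic can be sketched directly. The inputs are rough assumptions (around 500 models split over 7 factions, 8-model crews, an hour per game), so the result only agrees with the figures above to within an order of magnitude.

     ```python
     import math

     # Rough assumptions, not official counts:
     MODELS_PER_FACTION = 70  # ~500 models divided over 7 factions
     CREW_SIZE = 8            # typical number of models in a crew
     FACTIONS = 7
     HOURS_PER_GAME = 1

     # Number of distinct crews: choose CREW_SIZE models from one faction's pool.
     crews = FACTIONS * math.comb(MODELS_PER_FACTION, CREW_SIZE)
     years = crews * HOURS_PER_GAME / (24 * 365)
     print(f"{crews:.2e} possible crews, ~{years:.1e} years to play each once")
     ```

     With these assumptions it comes out around 7 * 10^10 crews and millions of years of continuous play, which is the point: exhaustive playtesting is not on the table.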
  15. Well said. I would also like to add that there are hundreds of models to test, and most of them won't appear in many battle reports, whether because they are not very popular, don't have easily available miniatures, or because their rules don't seem that interesting. Getting an accurate estimate of a model's power level based on battle reports alone would take at least dozens of reports. That is just a mathematical fact. Theorycrafting, or as it is called in the real world, modeling and simulation, lets us test the rules in a much more time-efficient way. It's not just a yelling competition without any foundation in reality. Many theorycrafters use actual mathematics as the foundation for their theories. Miniature wargames are also extremely easy to simulate, because the rules and interactions are very transparent and the underlying probabilities readily available. It's much easier than modeling real warfare, and even that is being done quite successfully. While one should be careful about drawing definite conclusions from theoretical analysis alone, it is an extremely powerful tool for finding the potentially under- or overperforming models that deserve more playtesting attention.
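     As an illustration of how available those underlying probabilities are, here is a deliberately simplified opposed-duel calculation. The attack and defense stats are hypothetical, and suits, Jokers and cheating fate are ignored, so this is a sketch of the approach rather than the full game.

     ```python
     from itertools import product

     # Exact hit probability for a simplified opposed duel:
     # each side adds one card (value 1-13, uniform) to its stat,
     # and the attacker needs the strictly higher total.
     ATTACK, DEFENSE = 6, 5  # hypothetical stats
     VALUES = range(1, 14)   # card values off the top of the deck

     wins = sum(1 for a, d in product(VALUES, repeat=2)
                if a + ATTACK > d + DEFENSE)
     p_hit = wins / 13 ** 2
     print(f"P(attacker wins the duel) = {p_hit:.3f}")
     ```

     Enumeration like this is exact, takes microseconds, and extends naturally to suits, Jokers and cheating once you model the hand, which is why simulating a card-driven wargame is so much easier than simulating real warfare.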