Please make only concise (150 words or fewer), meaningful comments; avoid posts that are devoid of meaningful contribution. The purpose is to explore and assess concepts, methods, and critical-analysis topics in a challenging yet collaborative manner.
DB topic – How do you form good analyses from faulty, biased, and unreliable data and methods?
In the previous discussion we dealt with algorithms, models, AI, and machines generally taking over many jobs in the future. Machines are great at some things, but not at things requiring mature judgment, contextual nuance, synthesis, and, may we say, experiential finesse. Machines and models ARE TOOLS.
Strategy is formed more in the qualitative realm of judgment, finesse, nuance, and synthesis than in the realm of things machines do well.
Worse, strategy is about the FUTURE, and both machines and most people are really bad at forecasting the future. My proof includes the failure of both machines and most humans (some got it right) to forecast 1) the election of President Trump (not Mrs. Clinton), 2) the BREXIT vote, and 3) the 2007–2008 economic collapse, which was missed by about 40,000 models and a few thousand “experts” queried by the Wall Street Journal… among more cases than I care to mention.
That insight takes us to this week’s DISCUSSION BOARD TOPIC:
HOW CAN YOU FORM GOOD ANALYSES WHEN YOU TYPICALLY HAVE BIASED AND LIMITED DATA (SOMEONE CREATED THE DATA, AND FOR WHAT PURPOSE YOU USUALLY DO NOT KNOW) AND UNRELIABLE MODELS (WHICH WERE ALSO CREATED WITH BUILT-IN BIASES, AND FOR PURPOSES YOU MAY OR MAY NOT KNOW)? Discuss techniques you can use to form good analyses, and hence good strategy, in this kind of world.