
Inferential Stats in Decision Making Discussion Questions

 

  • Requires 2 peer replies after you answer the question below
  • Describe the difference between statistical significance and practical significance.
  • What assumptions are necessary to perform a large sample test for the difference between two population means?
  • Ask an interesting, thoughtful question pertaining to the topic
  • Answer a question (in detail) posted by another student or the instructor
  • Provide extensive additional information on the topic
  • Explain, define, or analyze the topic in detail
  • Share an applicable personal experience

1) Describe the difference between statistical significance and practical significance.

While statistical significance relates to whether an effect exists, practical significance refers to the magnitude of the effect. The hypothesis testing procedure determines whether the sample results you obtained would be likely if the null hypothesis were correct for the population. If the results are sufficiently improbable under that assumption, you can reject the null hypothesis and conclude that an effect exists. In other words, the strength of the evidence in your sample has exceeded the threshold you defined with the significance level (alpha), and your results are statistically significant. With a large enough sample, however, even a trivially small effect can clear that threshold, which is why practical significance has to be judged separately, usually by looking at the size of the effect.
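To make this concrete, here is a minimal Python sketch (the group names, means, and sample size are hypothetical, and it assumes NumPy and SciPy are available) showing how a tiny difference in means can be statistically significant with a huge sample while remaining practically negligible:

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: with a very large sample, even a tiny difference
# in means produces a small p-value (statistical significance) while the
# effect size stays too small to matter in practice.
rng = np.random.default_rng(42)

n = 1_000_000                                         # very large sample per group
group_a = rng.normal(loc=100.0, scale=15.0, size=n)
group_b = rng.normal(loc=100.1, scale=15.0, size=n)   # true means differ by only 0.1

t_stat, p_value = stats.ttest_ind(group_a, group_b)
cohens_d = (group_b.mean() - group_a.mean()) / 15.0   # effect size in SD units

print(f"p-value:   {p_value:.3g}")    # typically far below alpha = 0.05
print(f"Cohen's d: {cohens_d:.3f}")   # around 0.007 -> negligible practical effect
```

The p-value clears the usual 0.05 threshold, but an effect of roughly 0.007 standard deviations would rarely be worth acting on, which is exactly the gap between statistical and practical significance.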

What assumptions are necessary to perform a large sample test for the difference between two population means?

  • A large-sample test for the difference between two population means is used to determine whether there is a significant difference between the means of two groups based on sample data.
  • The test relies on a set of assumptions that must hold for the result to be interpreted properly and with validity.
  • Among these assumptions, the two samples must be drawn randomly and independently from the populations of interest, and each sample must be large enough (commonly at least 30 observations) that, by the Central Limit Theorem, the sampling distribution of the difference in sample means is approximately normal even when the underlying data are not (see the sketch after this list).
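As a minimal sketch (the distributions and sample sizes are hypothetical, and NumPy and SciPy are assumed), the following shows a large-sample z-test for the difference between two means; the underlying data are deliberately non-normal, which the large samples are assumed to compensate for:

```python
import numpy as np
from scipy import stats

# Hypothetical large-sample z-test for H0: mu1 = mu2.
# Assumes the two samples are random, independent of each other, and large
# enough (n >= 30 each) for the sampling distribution of the difference in
# sample means to be approximately normal.
rng = np.random.default_rng(0)
sample_1 = rng.exponential(scale=5.0, size=200)   # skewed data, but n is large
sample_2 = rng.exponential(scale=5.5, size=250)

mean_diff = sample_1.mean() - sample_2.mean()
se_diff = np.sqrt(sample_1.var(ddof=1) / len(sample_1)
                  + sample_2.var(ddof=1) / len(sample_2))

z = mean_diff / se_diff
p_value = 2 * stats.norm.sf(abs(z))               # two-sided p-value

print(f"z = {z:.3f}, p = {p_value:.4f}")
```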

References:

Spatz, C. (2019). Exploring statistics: Tales of distributions. Outcrop Publishers.

2) Statistical methods are often used to test a hypothesis based on data from a study or experiment. When you test a hypothesis, there are two types of possible errors: a Type I error is the rejection of a valid (true) null hypothesis, and a Type II error is the failure to reject an invalid (false) null hypothesis. (It is never possible to prove a hypothesis; one can only reject it or fail to reject it, based on the data.) The probability of a Type I error is called the significance level. This is one kind of statistical significance.

We can also test the probability of an event. For example, we can calculate the probability of someone guessing the picture on a card. If the subject guesses more accurately than random chance would predict, that result is considered statistically significant. In other words, the result cannot be explained by chance alone.

Practical significance indicates a result that is useful in the real world. For example, a drug might consistently perform slightly better than a placebo, and this result might be statistically significant, but the drug might not have practical significance because the difference between the drug and the placebo is far too small to matter, and therefore the result is not useful in the real world (Rogers et al., 1993). It does not have practical significance.
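The card-guessing example can be checked with a simple binomial test. In this hypothetical sketch (the counts are invented for illustration, and SciPy is assumed), a subject who guesses which of five cards is shown has a 1-in-5 chance of being right by luck alone:

```python
from scipy import stats

# Hypothetical example: 34 correct guesses in 100 trials, where pure chance
# predicts a success probability of 0.2.  A one-sided binomial test asks how
# likely such a result is if the subject is only guessing.
result = stats.binomtest(k=34, n=100, p=0.2, alternative="greater")
print(f"p-value: {result.pvalue:.4f}")   # small p-value -> statistically significant
```

A small p-value here means the performance cannot easily be explained by chance alone, but whether 34% accuracy is useful for anything is a separate, practical question.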

If you mean exactly what you say, that's unlikely. But all that means is that you won't be able to reject whatever hypothesis you have in mind.

Ideally, the test you use should not depend on the data: you decide which hypothesis you will test before looking at the data, and the testing method should be decided in advance. In practice this ideal is tempered by the fact that tests are usually approximate. If you recognise from the data that the assumptions the test is based on are faulty, you might change to more robust methods.

If you mean that the parameters are exactly equal in the two populations from which the data are sampled, that's also unlikely. But even if it's true, you won't know it; that's what the test is supposed to check. If there really is no difference and you test at the 5% level, then the probability is 5% that you will reject equality. If the difference is small but non-zero, that probability is slightly more than 5%.
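That last point is easy to check by simulation. In this hypothetical sketch (the sample sizes and distributions are made up, and NumPy/SciPy are assumed), both groups are drawn from the same population, so every rejection is a Type I error, and the rejection rate comes out near the 5% level:

```python
import numpy as np
from scipy import stats

# Hypothetical simulation: when the two population means are truly equal and
# we test at alpha = 0.05, the null hypothesis is rejected in about 5% of
# repeated samples -- the Type I error rate.
rng = np.random.default_rng(1)
alpha = 0.05
trials = 10_000
rejections = 0

for _ in range(trials):
    a = rng.normal(loc=50.0, scale=10.0, size=40)
    b = rng.normal(loc=50.0, scale=10.0, size=40)   # identical population mean
    _, p = stats.ttest_ind(a, b)
    rejections += p < alpha

print(f"Empirical rejection rate: {rejections / trials:.3f}")   # close to 0.05
```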

References:

Rogers, J. L., Howard, K. I., & Vessey, J. T. (1993). Using significance tests to evaluate equivalence between two experimental groups. Psychological Bulletin, 113(3), 553.