The results of the three surveys do correlate with each other. All three determined that I am liberal but not too extreme. However, all three were a little different. On the one that Mr. Lynch gave us, I was neither conservative nor liberal, but leaning slightly toward liberal. On the second survey, from the links, I was a moderate liberal. Finally, the third survey's result stated that I am very liberal. I believe the only reason the three results vary is the number and kind of questions each survey asked. For example, the first survey had 40 questions, the second had only 10, and the last had 20. In addition, not all the surveys had the same answer choices: one had 6 choices, another had 5, and the last had only 3 to choose from. Was I surprised by the results? I honestly don't know, because I have no …
I strongly agree with all of the statements I listed above. If the surveys had been composed of only those questions, the results would have been the same for me. Honestly, I don't know why, but these statements seem like common sense to me to strongly agree with.
To take this research a little further, I talked to ten people I know personally who took the survey. I had them compare their answers with the correct answers and asked them why they answered the way they did.
_____ Referring to Question #10 above, which of the following best describes why you might be cautious in relying on these results? (A) The sample size is too small to make any reliable inference about the entire population. (B) Silly questions sometimes generate silly responses, not true opinions. (C) The respondents may not be a representative sample of any population of interest. (D) Newspapers tend to skew results to fit their own agenda.
When using a survey, there will likely be a greater number of responses if the survey consists of multiple-choice questions. While this format does allow respondents to give their opinions, it does not give them the opportunity to provide more detail or expand on why they hold those opinions. When the person interviewed in this article, Jack Thompson, a critic of video game violence, was asked what percentage of games are violent, he commented that this is not the relevant question, because it is more important to know how many of the games actually being sold contain violent content.
I do feel that there were a few good questions that I could relate to and found interesting. It did take me about 45 minutes to finish the survey, mostly because some of the questions were difficult to answer. However,
4. What evidence have you offered to support your claim/position? Have you included your survey results?
There are multiple types of survey methods utilized in research. Three of the main survey methods are interviews, electronic surveys, and written surveys. In interviews, the assessor works directly with the respondent to obtain responses to the survey. Utilizing interviews allows researchers to obtain comprehensive and thorough responses. Moreover, the interviewer is able to follow up on certain responses if more information is needed. Despite these benefits, interviews have some potential disadvantages. It is difficult to obtain a representative sample of a population willing to participate in an interview. Additionally, this type of survey is often costly in both time and money. Furthermore, there is potential for interviewer bias to affect the responses.
With the 2016 Presidential Election approaching, pollsters have been conducting several polls. The polling industry has recently made two high-profile mistakes: the 2014 election and a mishap involving 2016 presidential candidates Clinton and Sanders. Even the most prestigious polling companies predicted the wrong outcomes. This has led to a nationwide discussion about the reliability of polls, which has been on a downward spiral for years. Polls have become far less reliable due to the rise of cellphones, internet-based polls, annoying telemarketers, and the technique of “weighting” polls.
The participants were briefed generally about the information obtained through the questionnaire, but they were instructed to read the surveys and answer the questions on their own. Participants answered questions on a fixed scale, as well as numerous (find specific number) demographic questions and one open-ended question. The participants were informed that their results would remain anonymous, and they were asked to seal their results in an envelope themselves after completing the questionnaires.
According to the test, I got the most common result, which was “solid liberal”. I am a careful citizen and believe that we should take care of our country’s security, but I am also very "liberal" in many categories. I consider myself a Democrat, but I don’t completely agree with Barack Obama. Barack Obama was a good president,
In my results, my political compass was -2.63 for economic and -4.56 for social. This means that I am a slightly left libertarian. After reading about the views of each side, I think this test placed me into the correct area. I agree with the results of this test, because I believe that I am not an extreme liberal or leftist, but still agree with more of their ideas than a conservative or rightist.
1. The polls I have found vary widely in their findings (and their reliability), but I absolutely agree that there is not a consensus on these issues, and I am willing to accept a division into thirds, as you suggest.
## Public Opinion Survey analysis
The last question I used in my analysis was “Overall, how confident are you about your ability to take good care of your health?” There were 7 possible responses: completely confident, very confident, somewhat confident, a little confident, not confident at all, refused, and don’t know. Of the 7,550 participants in the survey, 22% chose completely confident, while around half (49.1%) answered very confident. About 23.9% chose somewhat confident, 3.5% chose a little confident, and 1.3% chose not confident at all; the final two responses, refused and don’t know, together received less than 0.3%. Looking at this question, it seems that around 95% of the participants had at least average confidence in their ability to take care of themselves. The data in this survey were collected by phone and mail.
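The breakdown above can be tallied to check the "around 95%" figure; a minimal sketch, assuming the reported percentages are taken at face value and "at least average confidence" means somewhat confident or higher:

```python
# Reported response shares (percent of the 7,550 participants).
shares = {
    "completely confident": 22.0,
    "very confident": 49.1,
    "somewhat confident": 23.9,
    "a little confident": 3.5,
    "not confident at all": 1.3,
    "refused/don't know": 0.3,  # reported only as "less than 0.3%"
}

# "At least average confidence" = somewhat confident or higher.
at_least_average = sum(
    shares[k] for k in ("completely confident", "very confident", "somewhat confident")
)
print(round(at_least_average, 1))  # 95.0
```

The three top categories sum to exactly 95.0%, which matches the essay's rounded claim.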
In survey method research, participants answer questions administered through interviews or questionnaires. After participants answer the questions, researchers describe the responses given. In order for the survey to be both reliable and valid, it is important that the questions are constructed properly. Questions should be written so they are clear and easy to comprehend.
De Vaus (2002) notes that the value of surveys is in “that we collect information about the same variables or characteristics… and end up with a data grid… Since the same information is collected from each case the cases are directly comparable” (p. 3). The technique used by the UNDP/GWP (2005) group did not address the same questions to every participant, so the extracted data are not strictly comparable. This also means the number of participants, 7,515 or 0.05% of a population of 15 million, is misleading, since not every participant answered each question and the raw data did not include how many times each question was asked (UNDP/GWP, 2005). A study is only reliable if it is possible to replicate the evidence for the same situation (Simons, 2014, p. 76).
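The 0.05% figure quoted above is easy to verify; a quick sketch of the arithmetic, assuming the round population figure given in the text:

```python
sample_size = 7_515          # participants in the UNDP/GWP (2005) survey
population = 15_000_000      # population cited in the text

# Sample expressed as a percentage of the population.
fraction_pct = 100 * sample_size / population
print(round(fraction_pct, 2))  # 0.05
```

The exact value is about 0.0501%, so reporting it as 0.05% is a fair rounding, though, as noted, the headline sample size overstates coverage when not every participant answered every question.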