Predicting Moral Uncertainty in the Era of Social Algorithms
- Christopher Nguyen
- Sep 11, 2024
Summary
The study of moral uncertainty is a relatively unexplored area in the age of social algorithms. As algorithms become more capable of aiding human decision-making, ethical concerns arise. We have already witnessed social networks' influence on politics and social issues, including their impact on presidential elections. This underscores the urgency of building safe and responsible technology, which requires understanding how humans think about ethics and morality. In this paper we examine factors predicting moral uncertainty, focusing on technological optimism, confidence in government, and trust in others. Leveraging data from the World Values Survey (n = 2,412 Americans), we conduct ordinary least squares (OLS) and ordered logistic regression analyses. Both sets of results reveal that more positive views on technology and higher social outgroup trust are associated with slightly lower moral uncertainty. However, when segmenting Americans by social media usage, our findings show that daily social media users exhibit higher levels of moral uncertainty. Additionally, there is no evidence of a relationship between confidence in government and moral uncertainty. These results suggest that promoting technological optimism and nurturing outgroup trust may help people navigate ethical dilemmas, highlighting areas for future research.
Introduction
Moral uncertainty is not knowing which ethical rules to follow. It is often described as the problem of decision-making under uncertainty about fundamental moral principles (Tarsney et al., 2024). Ted Lockhart frames it as the question "What shall I do when I am uncertain what I morally ought to do?", a question that has no straightforward answer. We posit that studying morality is complex given the polarizing landscape of cultures, societies, and individual differences. Our aim is not to define what is morally right or wrong, but rather to understand the factors contributing to moral uncertainty.
We use the World Values Survey (WVS), a global research project that measures human values, as our dataset. Researchers from various disciplines rely on the WVS because it has been shown to reveal how what people want out of life and what they believe in change over time (WVS). We evaluate moral uncertainty along three dimensions: views on technology, confidence in government, and outgroup trust, with control variables of age, education, social media usage, religious service attendance, and U.S. geographic region.
Predictor Variables
Views on Technology: Views on technology measures attitudes toward the impact of science and technology on society. It was selected for analysis because of its relevance to how technology can alter moral values. According to a public opinion poll, "56% of experts agreed that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making" (Pew Research, 2023). These experts suggest this is because humans will value convenience over mental work and will allow "black box" systems to make decisions for them. For example, social media has already transformed society by using algorithms to recommend content it thinks the user wants. Similarly, large language models (LLMs) are automating much of human thinking, which poses benefits as well as threats to human morality. This raises the question of whether information provided by LLMs will help the human race converge on a more concrete, unified moral theory or create further separation of beliefs. Furthermore, science and technology are closely tied to new data that can influence our moral decision-making. Danaher and Sætra (2023) argue that knowing someone's heartbeat is irregular gives you information that could help prevent a fatal heart attack, imposing a new moral duty to intervene if possible.
Confidence in Government: Confidence in government is simply the level of confidence a person has in their nation's government. Theoretical frameworks for using confidence in government to predict moral uncertainty can be understood through social contract theory and social learning theory. "The main goal of social contracts is to show that members of a society can comply with social rules, laws, institutions, and/or principles of that society" (D'Agostino et al., 2024). These contracts are built by institutions to promote social cohesion, where citizens waive parts of their freedom in exchange for security and protection. However, this exchange can create inner conflicts. For example, when governments enact new laws or policies that conflict with a group's moral beliefs, that group has to come to terms with those new laws. Such conflicts highlight the complexity of social contracts, in which citizens and governments struggle to balance the collective good against individual rights, further complicating moral values.
While social contract theory emphasizes compliance with societal rules, social learning theory extends this by highlighting how governments and their leaders influence moral behavior. According to Mozumder (2022), "political leaders can influence the behaviour of their colleagues and citizens through role modelling (setting good examples), particularly by encouraging those who are new to politics to emulate good behaviour." Platforms such as Twitter amplify digital communication, empowering political leaders to share their ideas and opinions with a global audience. This creates a recurring flow of new information connecting diverse viewpoints that can rapidly shift moral consensus. Groups are constantly readjusting their moral positions as they comply with societal rules and respond to shifting political narratives.
Interpersonal Trust (Outgroup): The interpersonal trust (outgroup) scale is a composite score measuring trust in others who are dissimilar to you, such as strangers or people of another religion, whereas ingroup trust is defined as trust in people you know personally, your neighbors, or family members. We use only outgroup trust to represent interpersonal trust because ingroup trust is so common that it comes close to being universal, while outgroup trust varies greatly and is only, yet not always, high when ingroup trust is high (Delhey and Welzel, 2012). Studying outgroup trust in relation to moral uncertainty is vital because trust in others rests on a fundamental ethical assumption that the other person shares your fundamental values (Uslaner, 2002). As such, interpersonal trust and trustworthiness are understood as a specific kind of morality that acts as a guide to action (Bonowski and Minnameier, 2022). We hypothesize that greater outgroup trust produces more acceptance of differing values, thus leading to decreased moral uncertainty.
Control Variables
Social Media Usage: Social media is included as a control variable because of its ubiquity. It permeates much of our daily lives, influencing how we consume information and news and how we stay connected with friends and family.
Region Code by U.S. State: Region is chosen as a control because different areas of the United States tend to have their own sets of values, and region can serve as a proxy for broad patterns of religion and political party affiliation. This research acknowledges the complexity of explaining moral uncertainty, as there may be various cultural differences at the individual and societal level. For this reason we focus on the U.S. Midwest, Northeast, West, and South.
Age: Age is chosen as a control variable because different age groups may be more or less prone to moral uncertainty. For example, younger Americans may be more morally uncertain than older adults who have already established their moral values, or vice versa.
Education: Education level is chosen as a control variable because people who attend college may learn different values from their university depending on where it is located.
Religious Service Attendance: Religion is chosen as a control variable because it is a core value in many people's lives. In a Gallup survey, "67% of Americans who say they attend religious services weekly believe government policies affect values, compared with 60% of those who attend religious services at least monthly, and 51% of those who seldom or never attend" (Gallup, 2006). Religion serves as a guide to making ethical decisions, and this variable captures the potential influence of religious leaders through church attendance. Just as government institutions can influence Americans, so can religious leaders.
Hypothesis 1
Ho: There is no relationship between views on technology and moral uncertainty
Ha: There is a relationship between views on technology and moral uncertainty
Hypothesis 2
Ho: There is no relationship between confidence in government and moral uncertainty
Ha: There is a relationship between confidence in government and moral uncertainty
Hypothesis 3
Ho: There is no relationship between outgroup trust and moral uncertainty
Ha: There is a relationship between outgroup trust and moral uncertainty
Model Building: Predicting Moral Uncertainty Using the WVS
Outcome Variable Operationalization
In the World Values Survey, moral uncertainty is captured as the degree of agreement with the statement that one often has trouble deciding which moral rules are the right ones to follow (WVS). We used a single question to keep the dependent variable focused on a single construct and to reduce complexity. The DV was also reverse coded. The original item asked, "How much do you agree or disagree with the statement that nowadays one often has trouble deciding which moral rules are the right ones to follow?", with 1 = completely agree and 10 = completely disagree. We reversed it so that 1 indicates low moral uncertainty and 10 indicates high moral uncertainty.
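A minimal sketch of this recoding is shown below, assuming the raw item lives in the moral_comp data frame under the hypothetical column name moral_rules_item (the actual WVS variable code may differ).
#reverse code the WVS item (original: 1 = completely agree ... 10 = completely disagree)
#moral_rules_item is a hypothetical column name
moral_comp$Moral_uncertainty <- 11 - moral_comp$moral_rules_item #now 1 = low, 10 = high moral uncertainty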

Figure 1. Histogram of moral uncertainty
Predictor Variable Operationalization
We identified the most suitable variables from the World Values Survey that could potentially represent views on technology, confidence in government, and outgroup trust. Figure 2 illustrates which variables to integrate into or exclude from our primary predictor variables: tt1-tt4 (attitudes toward technology), tg1-tg3 (confidence in government), and ti1-ti6 (interpersonal/outgroup trust).

Figure 2. Corrplot of potential variables
tt1 = science and technology convenience, tt2 = science and technology future impact, tt3 = science and technology benefits, tt4 = science impact on morality, tg1 = confidence in government, tg2 = confidence in political party, tg3 = confidence in parliament, ti1 = trust in neighbor, ti2 = trust in people you know personally, ti3 = trust in strangers, ti4 = trust in people of another nationality, ti5 = trust in family, ti6 = trust in people of another religion.
Based on the corrplot matrix (Figure 2), we removed tt4 (science impact on morality) from the views on technology composite score because it had low correlations with the other items and measures a slightly different construct than tt1-tt3 (science and tech convenience, future impact, and benefits). We also removed ti1 (trust in neighbor), ti2 (trust in people you know personally), and ti5 (trust in family) from the inter_trust composite score, since they represent a specific type of interpersonal trust, namely ingroup trust. We can also see that tg2 (confidence in political party) and tg3 (confidence in parliament) are highly correlated with each other but not with tg1 (confidence in government). A plausible reason is that tg1 refers to a single institution, whereas parliament and political parties comprise multiple groups tied to different affiliations (Democratic, Republican, etc.). For this reason, we use only tg1 (confidence in government) as our IV for confidence in government, to minimize ambiguity.
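As a rough sketch of how a correlation screen like Figure 2 could be produced, assuming the candidate items are already columns of the moral_comp data frame (names as in the legend above):
#correlation screen of candidate items (sketch of Figure 2)
library(dplyr)
library(corrplot)
candidates <- moral_comp %>%
  select(tt1, tt2, tt3, tt4, tg1, tg2, tg3, ti1, ti2, ti3, ti4, ti5, ti6)
corrplot(cor(candidates, use = "pairwise.complete.obs"), method = "number", type = "lower")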
The views on technology variable consists of three questions combined into a composite score (a short sketch of the composite construction follows the outgroup trust items below). All three questions were on a 10-point Likert scale coded in the same direction.
Science and technology are making my lives healthier, easier, and more comfortable
Because of science and technology, there will be more opportunities for the next generation
The world is better off, or worse off, because of science and technology
Confidence in government consists of a single question on a 4-point Likert scale.
Confidence in government (in your nation's capital).
Outgroup trust consists of three questions combined into a composite score (see the sketch after the items below). To simplify the analysis, we use only the outgroup composite score, as outgroup trust remains underexplored and varies more than ingroup trust. All items are on a 4-point Likert scale coded in the same direction.
Trust in people you meet for the first time
People of another religion
People of another nationality
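Below is a minimal sketch of how the two composites might be constructed with row means; whether the original analysis used item means or sums is an assumption on our part (the alpha output further down reports scale means of roughly 7.3 and 2.6, consistent with item-scale means).
#composite scores via row means (sketch; means vs. sums is an assumption)
moral_comp$Positive_tech <- rowMeans(moral_comp[, c("tt1", "tt2", "tt3")], na.rm = TRUE)
moral_comp$Inter_trust   <- rowMeans(moral_comp[, c("ti3", "ti4", "ti6")], na.rm = TRUE)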
Below are the updated corrplots, final variables, and Cronbach's alpha levels for views on technology and interpersonal trust (outgroup).
Views on technology variable corrplot and Cronbach's alpha

Figure 3. Corrplot for views on technology
In Figure 3, the correlations for views on technology range from .43 to .65.
#run alpha on positive_tech
library(dplyr) #provides %>% and select()
moral_comp_tech <- moral_comp %>%
  select(tt1, tt2, tt3)
psych::alpha(moral_comp_tech, na.rm = TRUE, check.keys=TRUE) #Run the alpha calculation
##
## Reliability analysis
## Call: psych::alpha(x = moral_comp_tech, na.rm = TRUE, check.keys = TRUE)
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd median_r
## 0.78 0.78 0.72 0.54 3.5 0.0078 7.3 1.9 0.53
##
## 95% confidence boundaries
## lower alpha upper
## Feldt 0.77 0.78 0.8
## Duhachek 0.77 0.78 0.8
Based on the alpha results, the views on technology (positive_tech) composite score shows acceptable reliability (alpha = .78).
Interpersonal trust (outgroup) variable corrplot and Cronbach's alpha

Figure 4. Corrplot for interpersonal trust (outgroup)
In the Figure 4 corrplot, the correlations for interpersonal trust (outgroup) range from .47 to .76.
#run alpha on inter_trust
moral_comp_inter <- moral_comp %>%
  select(ti3, ti4, ti6) #removed ti1, ti2, and ti5 to represent out-group interpersonal trust
psych::alpha(moral_comp_inter, na.rm = TRUE, check.keys=TRUE) #Run the alpha calculation
##
## Reliability analysis
## Call: psych::alpha(x = moral_comp_inter, na.rm = TRUE, check.keys = TRUE)
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd median_r
## 0.79 0.8 0.75 0.56 3.9 0.0078 2.6 0.56 0.48
##
## 95% confidence boundaries
## lower alpha upper
## Feldt 0.77 0.79 0.8
## Duhachek 0.77 0.79 0.8
In the alpha results, the interpersonal trust (outgroup) composite score also shows acceptable reliability (alpha = .79).
Sample
Our sample consists of n = 2,412 Americans with a mean age of 43.5. A majority use social media daily, with the distribution being daily = 50.1%, weekly = 15.3%, monthly = 5.8%, less than monthly = 7.2%, and never = 21.6%. The sample is also relatively well educated: upper secondary = 21.8%, post-secondary = 24.8%, bachelor's = 24.9%, master's = 11.8%, and doctoral = 5.1%.
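A short sketch of how these descriptives can be reproduced, assuming Age, Social_media_usage, and Education are columns of moral_comp (column names taken from the model output below):
#sample descriptives (sketch)
mean(moral_comp$Age, na.rm = TRUE) #mean age
round(prop.table(table(moral_comp$Social_media_usage)) * 100, 1) #social media usage %
round(prop.table(table(moral_comp$Education)) * 100, 1) #education level %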
Analyzing results using OLS and Logistic Regression
We ran an OLS regression along with an ordered logistic regression to predict moral uncertainty. Additionally, we wanted to see whether our results were consistent in terms of significance and direction of the variables across both regressions.
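A sketch of the two model specifications is shown below, assuming the variable names that appear in the output tables (the fitted OLS object is called lm4, the name referenced in the Breusch-Pagan output later); the exact call options are assumptions.
#OLS and ordered regressions (sketch)
library(MASS) #polr()
lm4 <- lm(Moral_uncertainty ~ Positive_tech + Confidence_government + Inter_trust +
            Age + Education + Social_media_usage + Religious_service + regions,
          data = moral_comp)
olr <- polr(factor(Moral_uncertainty, ordered = TRUE) ~ Positive_tech + Confidence_government +
              Inter_trust + Age + Education + Social_media_usage + Religious_service + regions,
            data = moral_comp, method = "logistic", Hess = TRUE) #method = "probit" for column (2)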
OLS Regression
##
## =====================================================================
## Dependent variable:
## --------------------------------------
## Moral Uncertainty in the United States
## ---------------------------------------------------------------------
## Positive views on technology -0.241*** (0.031)
## Confidence in government 0.056 (0.063)
## Interpersonal trust (Outgroup) -0.336*** (0.103)
## Age -0.019*** (0.004)
## Education Level -0.161*** (0.036)
## Social Media Usage -0.095*** (0.035)
## Religious Service Attendance 0.018 (0.026)
## U.S. Region: Northeast 0.256 (0.179)
## U.S. Region: South -0.012 (0.142)
## U.S. Region: West -0.197 (0.154)
## Constant 9.705*** (0.403)
## ---------------------------------------------------------------------
## Observations 2,412
## R2 0.080
## Adjusted R2 0.076
## Residual Std. Error 2.687 (df = 2401)
## F Statistic 20.751*** (df = 10; 2401)
## =====================================================================
## Note: *p<0.1; **p<0.05; ***p<0.01
In the OLS model, positive views on technology, interpersonal trust (outgroup), age, education, and social media usage are all significantly related to the DV (moral uncertainty), and each of these relationships is negative.
Ordered Logistic Regression
##
## ==================================================
## Dependent variable:
## ----------------------------
## Moral_uncertainty
## ordered ordered
## logistic probit
## (1) (2)
## --------------------------------------------------
## Positive_tech -0.163*** -0.091***
## (0.021) (0.012)
##
## Confidence_government 0.025 0.017
## (0.042) (0.024)
##
## Inter_trust -0.217*** -0.127***
## (0.068) (0.039)
##
## Age -0.011*** -0.007***
## (0.002) (0.001)
##
## Education -0.084*** -0.056***
## (0.024) (0.014)
##
## Social_media_usage -0.060*** -0.037***
## (0.023) (0.013)
##
## Religious_service 0.014 0.007
## (0.017) (0.010)
##
## regions.L -0.126* -0.068
## (0.072) (0.042)
##
## regions.Q -0.145* -0.077*
## (0.075) (0.044)
##
## regions.C 0.104 0.057
## (0.078) (0.046)
##
## --------------------------------------------------
## Observations 2,412 2,412
## ==================================================
## Note: *p<0.1; **p<0.05; ***p<0.01
Based on the ordered logistic regression, positive views on technology, interpersonal trust (outgroup), age, education, and social media usage remain significant and retain the same direction as in the OLS regression.
Test Diagnostics
We also tested the model for multicollinearity, heteroskedasticity, and normality of residuals. There was no multicollinearity, but the model violated the homoskedasticity and normality-of-residuals assumptions.
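A sketch of the diagnostic calls, assuming the fitted OLS object is lm4 (as named in the Breusch-Pagan output below):
#model diagnostics (sketch)
library(car)    #vif()
library(lmtest) #bptest()
car::vif(lm4)                #generalized VIFs for multicollinearity
lmtest::bptest(lm4)          #studentized Breusch-Pagan test for heteroskedasticity
shapiro.test(residuals(lm4)) #Shapiro-Wilk test of residual normality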
Multicollinearity test
## GVIF Df GVIF^(1/(2*Df))
## Positive_tech 1.137989 1 1.066766
## Confidence_government 1.087600 1 1.042880
## Inter_trust 1.120662 1 1.058613
## Age 1.173918 1 1.083475
## Education 1.105648 1 1.051498
## Social_media_usage 1.097803 1 1.047761
## Religious_service 1.082769 1 1.040562
## regions 1.025447 3 1.004197
There is no multicollinearity between the independent variables, as no GVIF values are greater than 2.
Breusch-Pagan test
##
## studentized Breusch-Pagan test
##
## data: lm4
## BP = 105.62, df = 10, p-value < 2.2e-16
The model violates the homoskedasticity assumption (p < .05), indicating heteroskedastic errors.
Shapiro and Wilks for normality of error
##
## Shapiro-Wilk normality test
##
## data: residuals
## W = 0.98, p-value < 2.2e-16
The model also violates the normality assumption based on the Shapiro-Wilk test (p < .05).

Figure 5. Q-Q plot of residuals
The Q-Q plot shows deviations at both tails of the distribution, indicating normality issues.
Robustness check of standard errors
##
## ===================================================================
## Dependent variable:
## ---------------------------------------------
## Moral_uncertainty
## OLS coefficient
## test
## (1) (2) (3)
## -------------------------------------------------------------------
## Positive_tech -0.241*** -0.241*** -0.241***
## (0.031) (0.033) (0.033)
##
## Confidence_government 0.056 0.056 0.056
## (0.063) (0.065) (0.065)
##
## Inter_trust -0.336*** -0.336*** -0.336***
## (0.103) (0.106) (0.107)
##
## Age -0.019*** -0.019*** -0.019***
## (0.004) (0.004) (0.004)
##
## Education -0.161*** -0.161*** -0.161***
## (0.036) (0.037) (0.037)
##
## Social_media_usage -0.095*** -0.095*** -0.095***
## (0.035) (0.036) (0.036)
##
## Religious_service 0.018 0.018 0.018
## (0.026) (0.026) (0.026)
##
## regionsNortheast 0.256 0.256 0.256
## (0.179) (0.176) (0.177)
##
## regionsSouth -0.012 -0.012 -0.012
## (0.142) (0.140) (0.140)
##
## regionsWest -0.197 -0.197 -0.197
## (0.154) (0.158) (0.158)
##
## Constant 9.705*** 9.705*** 9.705***
## (0.403) (0.397) (0.398)
##
## -------------------------------------------------------------------
## Observations 2,412
## R2 0.080
## Adjusted R2 0.076
## Residual Std. Error 2.687 (df = 2401)
## F Statistic 20.751*** (df = 10; 2401)
## ===================================================================
## Note: *p<0.1; **p<0.05; ***p<0.01
Based on the robustness check, the adjusted standard errors are similar in size to the OLS standard errors, and the significance and direction of the coefficients are unchanged; the model remains robust despite the heteroskedasticity and normality violations.
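For reference, a sketch of how such a robustness check can be run with heteroskedasticity-consistent standard errors; the specific estimator (HC1 here) is an assumption.
#robust standard errors (sketch)
library(sandwich) #vcovHC()
library(lmtest)   #coeftest()
coeftest(lm4, vcov. = vcovHC(lm4, type = "HC1"))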
Data Visualizations

Figure 6. Views on technology and moral uncertainty by social media usage
In Figure 6, we can see that those who use social media daily and have negative views on technology tend to have higher moral uncertainty.
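A sketch of how a plot like Figure 6 could be drawn with ggplot2, assuming the variable names used in the models above; the exact aesthetics of the original figure are unknown.
#views on technology vs. moral uncertainty by social media usage (sketch of Figure 6)
library(ggplot2)
ggplot(moral_comp, aes(x = Positive_tech, y = Moral_uncertainty,
                       color = factor(Social_media_usage))) +
  geom_jitter(alpha = 0.3) +
  geom_smooth(method = "lm", se = FALSE) +
  labs(x = "Positive views on technology", y = "Moral uncertainty", color = "Social media usage")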

Figure 7. Interpersonal trust (outgroup) and moral uncertainty by social media usage
In Figure 7, we can see that those who use social media daily and have low social outgroup trust tend to have higher moral uncertainty.
Discussion
Our model as a whole accounted for 8% of the variance in moral uncertainty (R-squared = .08, F = 20.751, df = 10; 2401, p < .01). For H1, we reject the null hypothesis: the data provide some evidence of a relationship between views on technology and moral uncertainty, suggesting that more positive views on technology are associated with lower moral uncertainty. For H2, we fail to reject the null hypothesis: we did not find sufficient evidence of a relationship between confidence in government and moral uncertainty. For H3, we reject the null hypothesis: there is some evidence of a relationship between interpersonal trust (outgroup) and moral uncertainty, such that as outgroup trust increases, moral uncertainty decreases. Additionally, those who use social media daily, hold negative views on technology, and have lower social outgroup trust tend to have higher levels of moral uncertainty. This raises concerns about the ethical implications of social media platforms and whether artificially intelligent agents will follow the same path.
The test diagnostics reveal no multicollinearity violations. They do, however, indicate heteroskedasticity and non-normal residuals based on the Breusch-Pagan and Shapiro-Wilk tests, which may point to a non-linear relationship. A robust standard error check was used for further evaluation: all standard errors, significance levels, and coefficient directions remained stable when compared against the OLS model. Direction and significance also remained consistent between the OLS and ordered logistic regressions.
Limitations: This model stands as a foundational research project for understanding moral uncertainty, and further exploration is needed, especially in areas relating to technology. We acknowledge that underlying mechanisms may be missing from our model. As Sagan would say, "absence of evidence is not evidence of absence"; our research is limited by the available data. Additionally, some of the questions were not as straightforward as we wanted them to be, as they were pre-built questions from the World Values Survey. For example, when interpreting the views on technology variable, it is difficult to infer what type of technology respondents were referring to, so interpretations tend to be broad in scope. Results may well differ for specific technologies, specific government laws, or specific types of interpersonal outgroup interactions. This study also does not capture moral uncertainty across cultures outside the U.S. For the views on technology variable, the distribution was slightly skewed toward higher values (mean = 7.3 out of 10). Since our data were ordinal and skewed, we did not apply a log transformation; the implication is that the findings generalize best to those with more positive views on technology. The distribution for interpersonal outgroup trust was also slightly skewed, with most respondents falling between not trusting others and somewhat trusting them (mean = 2.55).
Future research: The WVS data used here were collected in 2017. At that time, social media algorithms were dominant and among the main sources of information, electric vehicles were on the rise as a cleaner avenue for transportation, and AI was beginning to be used in healthcare technology. Further work on predicting moral uncertainty could focus on one of these areas in greater detail, such as artificial intelligence and large language models; this is particularly relevant because LLMs were not widely commercialized until 2022. Alternative predictors of moral uncertainty could include moral philosophy frameworks such as utilitarianism versus deontology: does either group exhibit more moral uncertainty than the other? Future work could also examine how moral uncertainty differs across cultures globally.
References
Bonowski, T., & Minnameier, G. (2022). Morality and trust in impersonal relationships. Journal of Economic Psychology, 90, Article 102513. https://doi.org/10.1016/j.joep.2022.102513
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.
Danaher, J., & Sætra, H. (2023). Mechanisms of techno-moral change: A taxonomy and overview. Ethic Theory Moral Prac, 26, 763–784. https://doi.org/10.1007/s10677-023-10397-x
D'Agostino, F., Gaus, G., & Thrasher, J. (2024). Contemporary approaches to the social contract. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Spring 2024 ed.). Stanford University. https://plato.stanford.edu/archives/spr2024/entries/contractarianism-contemporary/
Delhey, J., & Welzel, C. (2012). Generalizing trust: How outgroup-trust grows beyond ingroup-trust. World Values Research, 5(3), 46-69.
Jenkins, H. (2006). Searching for the origami unicorn: The Matrix and transmedia storytelling. In Convergence culture: Where old and new media collide (pp. 95-134).
Jones, J. M. (2024). Church attendance has declined in most U.S. religious groups: Three in 10 U.S. adults attend religious services regularly, led by Mormons at 67%. [Gallup].
Kugler, L. (2022, April 1). Technology's impact on morality: Leading technologists and thinkers are concerned about technology's impact on our ethical thinking. Communications of the ACM. https://cacm.acm.org/news/260176-technologys-impact-on-morality/fulltext
Lockhart, T. (2000). Moral Uncertainty and Its Consequences. Oxford University Press.
Nickel, P. J., Kudina, O., & van de Poel, I. (2022). Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap. Perspectives on Science, 30(2), 260-283. https://doi.org/10.1162/posc_a_00414
MacAskill, W., Bykvist, K., & Ord, T. (2020). Moral uncertainty. Oxford University Press.
Marcus, A. (2013, January). Effect of convergence culture on spectatorship. [Chapman University]
Martinho, A., Kroesen, M., & Chorus, C. (2021). Computer Says I Don't Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence. Ethics and Information Technology, 23(3), 727-739. https://doi.org/10.1007/s10676-021-09583-1
Mozumder, N. A. (2022). Can ethical political leadership restore public trust in political leaders? Public Organization Review, 22, 821-835. https://doi.org/10.1007/s11115-021-00536-2
Pew Research Center. (2023, February 24). The future of human agency.
Tarsney, C., Thomas, T., & MacAskill, W. (2024). Moral decision-making under uncertainty. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Spring 2024 ed.). https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559
Uslaner, E. M. (2002). The moral foundations of trust. Cambridge University Press.