Artificial Intelligence and Ethics: A Call for Dialogue
Dear STEM-net Horizons Community,
As we reflect on the groundbreaking achievements recognized by the 2024 Nobel Prizes, we are reminded of how science continuously shapes the world around us. Last year’s laureates in Physics (awarded jointly to Prof. John J. Hopfield and Prof. Geoffrey Hinton) and in Chemistry (one half awarded to Prof. David Baker and the other half jointly to Dr. Demis Hassabis and Dr. John Jumper) have pushed the boundaries of human knowledge, inspiring us to think critically about the future of technology, particularly in the realm of Artificial Intelligence (AI).
AI has become an integral part of our daily lives, driving advancements in areas such as medicine, engineering, automation, and decision-making. With its rapid evolution comes an urgent need to address its ethical implications.
- How do we ensure that AI remains a tool for progress rather than a force for inequality?
- What ethical responsibilities do scientists and engineers have when developing AI systems?
- Where should we draw the line between human and machine intelligence?
As President of STEM-net Horizons, I invite you all to participate in an open exchange of ideas where we can critically discuss these pressing issues. Whether you are passionate about AI’s technical aspects, its societal impact, or the philosophy behind machine ethics, your voice matters.
Stay tuned for upcoming discussions, events, and collaborations! I encourage you to share your thoughts by email at stemnethorizons@gmail.com and to contribute articles that can drive meaningful conversations. Your comments and articles will be posted on the Student Forum after a basic plagiarism check.
Together, let’s push the frontiers of knowledge and responsibility.
Best,
Bakos Christos
Founder & President, STEM-net Horizons
VIP COMMENTS

Dimitris Papastergiou
Greek Minister of Digital Governance
Dear Christos Bakos,
As the Greek Minister of Digital Governance, I would like to extend my heartfelt congratulations to you and your STEM-net Horizons global student community for initiating this important dialogue on Artificial Intelligence and Ethics. Your dedication to discussing such a transformative global issue is truly commendable.
The role of AI in shaping our future cannot be overlooked and it is inspiring to see young minds like yours leading the conversation. By fostering collaboration and critical thinking, you are not only contributing to the advancement of technology but also ensuring that ethical and responsible innovation remains at the forefront.
I applaud your vision and commitment to addressing global challenges through STEM and I wish you and your community every success in this endeavor. May your discussions be insightful, impactful and a driving force for positive change.
Congratulations and best wishes for a successful initiative!
Warm regards,
Dimitris Papastergiou
Greek Minister of Digital Governance
COMMENTS
Date: March 13, 2025
The main question I will answer is how to make AI a tool for progress, not inequality. It seems to me that AI should be open and free for everyone. Perhaps it is worth building in a certain kind of censorship that would help prevent mistakes. But censorship will cause plenty of problems of its own, because it may block something useful.
Also, the author of an AI system can switch the censorship off, which leaves the author and the user with effectively different AIs.
So, I think we should build AI with the censorship built in, so that using it without censorship is actually impossible.
That’s all.
Mirkin Alex
STEM-net Horizons Ambassador of Russia
Date: March 15, 2025
I think that if an AI tool with built-in censorship is to be used, then we need to answer the following question: who will control it? In my opinion, it is better for it to be controlled by an independent body such as an AI Ethics Council (like the UN Human Rights Council, but focused specifically on AI). Also, only the users (not the author!) should have the ability to apply various filters to this AI tool based on personal preferences (such as culture, religion, etc.).
A. Papadopoulos
Member of the Greek STEM-net Horizons Chapter
Date: March 19, 2025
From a technical point of view, we also have to consider whether the built-in censorship will be word-based or intention-based. If it is word-based, then it may be relatively easy to get around! Intention-based censorship is much more reliable, but its implementation is challenging!
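To make the difference concrete, here is a minimal Python sketch of the two approaches. It is only an illustration: the classifier passed to the intention-based filter is a hypothetical stand-in for a real trained intent model, not any actual system.

```python
# Minimal sketch: word-based vs. intention-based filtering.
# The classifier used below is hypothetical; in practice it would be a
# trained text-classification model returning a label and a confidence.

BANNED_WORDS = {"exploit", "attack"}

def word_based_filter(prompt: str) -> bool:
    """Block the prompt if it contains a banned word.
    Trivially evaded by misspellings such as 'expl0it'."""
    return any(token in BANNED_WORDS for token in prompt.lower().split())

def intention_based_filter(prompt: str, classify_intent) -> bool:
    """Block the prompt if a model judges its intent harmful.
    Harder to evade, but needs labeled data and careful evaluation."""
    label, confidence = classify_intent(prompt)
    return label == "harmful" and confidence > 0.9

# A crude stand-in for a trained intent model: it normalizes an obvious
# obfuscation ('0' for 'o') before looking for harmful intent.
def toy_classifier(prompt: str):
    normalized = prompt.lower().replace("0", "o")
    return ("harmful", 0.95) if "exploit" in normalized else ("benign", 0.95)

print(word_based_filter("how to exploit the system"))                    # True  (caught)
print(word_based_filter("how to expl0it the system"))                    # False (missed!)
print(intention_based_filter("how to expl0it the system", toy_classifier))  # True  (caught)
```

Even this crude stand-in catches an obfuscation that defeats the word list; a real intent model would, of course, require training data and careful evaluation, which is exactly where the implementation challenge lies.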
C. Tsiligiris
Member of the Greek STEM-net Horizons Chapter
Date: March 25, 2025
The question that interested me the most was, "Where should we draw the line between human and machine intelligence?" Personally, I believe that the line between human and machine intelligence mainly lies in understanding, creativity, and self-awareness. While machines excel at tasks like processing large amounts of data and recognizing patterns, human intelligence is distinguished by emotions, intuition, consciousness, and the ability to derive meaning beyond logic. All in all, I think that human creativity, and sometimes human spontaneity, cannot for now be replicated by any AI machine.
E.S. Kyrinas,
Member of the Greek STEM-net Horizons Chapter
Date: March 26, 2025
It may be interesting to see what AI has to say about all this:
ChatGPT’s Answer:
These are all profound questions, and each touches on important ethical, societal, and technological concerns. Here's how I might think through each one:
1. How do we ensure that AI remains a tool for progress rather than a force for inequality?
To ensure AI remains a tool for progress and not a force for inequality, we must prioritize fairness, transparency, and accessibility in AI development. First, it's crucial to address biases in AI systems, whether they are introduced by biased data or biased algorithms, as these can exacerbate existing inequalities. Diverse teams working on AI and a focus on ensuring AI is used for public good, particularly in healthcare, education, and governance, can help reduce the risk of deepening divides. Additionally, policies that promote equitable access to AI technologies—so that its benefits are shared widely rather than concentrated in the hands of a few—will be essential. In short, continuous oversight, public awareness, and inclusive decision-making can ensure AI fosters progress while minimizing negative impacts on inequality.
2. What ethical responsibilities do scientists and engineers have when developing AI systems?
Scientists and engineers have an immense ethical responsibility when developing AI systems, as their creations can have significant impacts on individuals, communities, and society as a whole. They must strive to ensure the safety, fairness, and transparency of AI technologies. This means proactively working to eliminate biases, ensuring systems don't inadvertently harm vulnerable groups, and promoting ethical guidelines for development. They should also consider the long-term implications of AI, such as how it may affect employment or privacy, and weigh the potential benefits against the risks. It’s crucial for AI developers to be aware of the societal context in which their systems will be deployed and to work toward creating systems that benefit humanity as a whole rather than a select few.
3. Where should we draw the line between human and machine intelligence?
Drawing the line between human and machine intelligence is tricky, but it's important to acknowledge that AI, no matter how sophisticated, operates fundamentally differently from human cognition. AI systems, while powerful in specific domains (like pattern recognition or processing large datasets), do not have consciousness, emotions, or a sense of morality. They also lack the rich, nuanced understanding of the world that humans develop through lived experiences. The line between human and machine intelligence should be drawn not just in terms of capabilities but also in terms of ethical and emotional dimensions, recognizing that AI is a tool designed to assist human progress rather than replace human judgment or experience. The distinction is important to prevent dehumanization and ensure that we don’t over-rely on machines in areas requiring empathy, creativity, or moral reasoning.
Overall, these questions highlight the need for a balance between technological advancement and careful consideration of its social and ethical implications. Ensuring AI is a force for good requires an ongoing, collective effort.
Bakos Christos
President of STEM-net Horizons &
STEM-net Horizons Ambassador of Greece
Date: March 27, 2025
I believe that the following research article will help our community gain a bird's-eye view of AI ethics. It offers a comprehensive overview of the field, including a summary and analysis of AI ethical issues, ethical guidelines and principles, approaches to addressing AI ethical issues, and methods for evaluating the ethics of AI technologies:
Changwu Huang et al., "An Overview of Artificial Intelligence Ethics," IEEE Transactions on Artificial Intelligence, vol. 4, no. 4, August 2023.
You can find the full article at the following link:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9844014
Alternatively, if you want to download more IEEE papers on this subject, follow the next steps:
- Go to the STEM Open Access Services.
- Select the IEEEXplore link.
- Enter the phrase “Artificial Intelligence and Ethics”.
- Press Search.
It would be awesome if everyone, not just the Ambassadors but also the Chapter members, could download interesting journal papers from the STEM Open Access Services and take part in a summer reading list and a virtual journal club.
Bakos Christos
President
Date: March 29, 2025
Where should we draw the line between human and machine intelligence?
It makes the most sense to draw the line in situations where moral judgment and human experience are essential. Machines can analyse data, recognise patterns, and do many other things, but what they cannot do is replicate understanding, emotions, and human experience. While AI can support us and enhance decision-making, it shouldn't replace human judgment in areas like ethics, relationships, and justice.
Ayush Shah
STEM-net Horizons Ambassador of Australia
Date: April 14, 2025
I find the summer reading list and the virtual journal club to be very interesting initiatives, and I would love to participate! Will there be an official announcement?
E. Karali
Member of the Greek STEM-net Horizons Chapter
Date: April 15, 2025
Yes, an official announcement will be made depending on the actual level of participation. Please stay tuned!
Christos Bakos
President
Date: April 29, 2025
Dear STEM-net Horizons Community,
I believe it is important to perceive AI as a tool — a very powerful one, but still a tool — much like numerical solvers of differential equations, calculators, internal combustion engines, or dynamite. From this perspective, most ethical concerns, such as AI being a "force for inequality" or questions about the "ethical responsibilities" of developers, are part of a broader discussion about the ethics of technology use as a whole. Just as the internal combustion engine can be used to increase food production through tractors or cause harm through tanks, most modern technologies serve both constructive and destructive purposes. Unfortunately, as history shows, this depends very little on the scientists and engineers who design them. While I would like to believe that the scientific community could make a difference on its own, especially as various open-source, community-serving projects have arisen in recent years, I remain pessimistic. Decisions largely rest with governments and policymakers, and it is toward them that the scientific community should direct its concerns and suggestions for solutions.
Regarding the question, "Where should we draw the line between human and machine intelligence?", I believe it remains unanswerable at this point, as we do not yet understand human intelligence itself. One might ask whether moral judgment is simply something we learn while growing up — and if so, why couldn't AI learn the same? The earlier comment citing consciousness and emotions as distinctions between humans and machines seems to me an evasion rather than an explanation. It resembles saying, "The difference between quantum mechanics and classical mechanics is that one is quantum and the other is not," which provides no real insight. Since we do not fully understand half of the problem, I believe the question is ill-posed. This subject presents a very interesting research topic that is not yet well understood and should be pursued using scientific methods, without hiding behind vague concepts like "consciousness" and treating them as answers.
Jan Turczynowicz
STEM-net Horizons Ambassador of Poland
Date: May 18, 2025
Dear STEM-net Horizons Community,
I think one of the main problems with using AI in ever more areas is this: "Who is to blame for potential negative consequences that AI use may cause?"
This can refer to a highly specialized AI trained to assist humans in a single task, or to an AI used to suggest solutions to prompts. Of course, we first have to establish which area we are talking about using AI in. Some are relatively harmless: your average ChatGPT prompt about a homework assignment won't do irreparable damage to anyone. The situation gets more serious in areas where a mistake could create or aggravate health issues, or in the justice system, where it could send someone to jail.
So, for example, say an AI system was trained specifically to analyze blood samples and flag those that contain a disease-specific component. This is the AI's only job, and it has been tested for a couple of years. Apparently flawless, therefore believed reliable. What happens when it does make a mistake and a person who needed treatment didn't get it? Whose fault is that? Is it the fault of the AI's creator or of the person who used it?
This is a situation we won't be able to avoid. Every time a vaccine is made, it is tested over a long period; still, some people experience unexpected negative consequences. And those are vaccines, which we have been making for a long time. How are we supposed to figure out how long an AI should be tested before labeling it reliable in areas as delicate as medicine? And then what of the exceptions? One solution would be to warn all patients of that possibility beforehand.
For contrast, if this analysis and categorization had been conducted by doctors or medical researchers, the blame goes to them. We have legal procedures in place, already established, for those cases. All medical professionals are aware of them and have signed agreements to be subjected to them if needed.
Yes, AI may speed processes up, but it cannot take the blame for its mistakes; we can hardly punish an AI effectively. So, to answer the question "Where should we draw the line between human and machine intelligence?", I would suggest drawing the line at areas where AI may accidentally cause irreparable damage to a life form or to someone's legal status. At least until the appropriate legal procedures are in place.
Kyara Maria Breidenstein Solis
STEM-net Horizons Ambassador of Mexico
Date: June 5, 2025
Artificial Intelligence has undoubtedly permeated numerous aspects of our lives. From helping inform markets by identifying and analyzing trends to making predictions about the behaviour of physical and biological systems, the strength of Machine and Deep Learning models is increasingly used to refine predictive analytics and revolutionize forecasting. However, it is very important that such models be used ethically. Large Language Models (LLMs) and advanced AI algorithms used by several companies often infringe on user privacy by using sensitive user data to train their models. If the data are not protected correctly, leakages can quickly escalate into unauthorized access, which is detrimental to users’ privacy. Similarly, the use of AI models and the need for large training datasets have led to an increase in the sale of user data to third parties, who use it for their own algorithms. Overall, to ensure that the use of AI is ethical, it is important to adhere to legal frameworks and regulations that protect user data and privacy, which become increasingly vulnerable as we grow more immersed in the age of AI.
Filippos Akylas Kaloudis
Undergraduate Student - Imperial College
Member of the Greek STEM-net Horizons Chapter
Date: July 15, 2025
Artificial Intelligence (AI) is now a driving force in science and society. This was highlighted in the 2024 Nobel Prizes, which honored major contributions in Physics and Chemistry, showing how AI is transforming both everyday technologies and scientific discovery. Neural networks power image recognition and language tools, while protein prediction systems like AlphaFold2 accelerate drug development and the design of new molecules. Yet with this progress come urgent ethical challenges.
To keep AI a force for good, fairness and accessibility are essential. If advanced AI remains controlled by a few corporations or nations, inequality will deepen. Developers must address bias, protect privacy, and ensure transparency so AI decisions can be trusted.
I strongly believe that the future of AI depends on how we guide it! The Nobel Prizes of 2024 showcase its power to expand knowledge and improve lives, but they also remind us that innovation must remain grounded in ethics. By keeping humans—not machines—at the center, we can ensure AI strengthens society while preserving the values that define us.
Grigoria Kiosse
Member of the Greek STEM-net Horizons Chapter
Date: September 7, 2025
Dear STEM-net Horizons community,
The questions posed cannot be answered in a single sentence; they require a considered view of the intentions and capabilities of artificial intelligence itself. But let’s take one question at a time.
In the current situation, AI is a very powerful, sometimes life-easing tool used across multiple domains of science for the purpose of progress. But why is the phrase “force for inequality” concerning? The answer lies in AI’s own capabilities. Over the years, AI may grow to become something more than a tool, possibly the aforementioned force. Whether AI is used as a tool for progress or as a force for inequality depends on its users and its controllers. AI can serve both purposes, but its impact depends on our intentions. So, to answer: preventing it from becoming a force for inequality relies, perhaps not solely but to a large extent, on us.
The responsibility of scientists in the creation of such tools can be very consequential. They must be aware that, because of its power, AI could be used as a “force for inequality” and might one day even determine its own actions. To sum up, scientists and engineers must always know what their creations are being used for in order to prevent harmful outcomes that may affect humans. Though this may seem somewhat optimistic, since such powerful tools are, and have long been, controlled largely by governmental policy.
The third question, “Where should we draw the line between human and machine intelligence?”, is rather interesting. Recently, some AI systems have managed to pass Alan Turing’s test, the test meant to distinguish humans from machines. However, the test’s distinction rests on human behavior, emotions, and consciousness, concepts we do not yet fully understand. How the human brain works, even now in the 21st century, remains largely unknown and mysterious. If we understood the different functions and methods our brain uses to process information, we could establish more reliable criteria for this distinction. So, to conclude, this question is for now unanswerable. We can only draw a line based on our social experiences and our responses to other human beings in judging whether something is a machine or a person.
Kaimakis Alexandros
Undergraduate student - Aristotle University of Thessaloniki (AUTh)
Member of the Greek STEM-net Horizons Chapter
Date: October 4, 2025
AI must be developed and applied in ways that promote fairness, inclusivity, and accessibility to ensure it remains a tool for progress rather than inequality. This requires addressing issues such as bias in algorithms, unequal access to technology, and the potential for job displacement. By prioritizing transparency, accountability, and equitable distribution of AI’s benefits, societies can work toward ensuring that advancements in AI support all individuals rather than widening existing social divides.
Scientists and engineers play a crucial role in shaping the ethical foundations of AI. They hold the responsibility of building systems that prioritize safety, fairness, and respect for human dignity. This includes implementing ethical design principles, conducting rigorous testing to prevent harmful consequences, and engaging in ongoing dialogue with policymakers and the public. Ultimately, the boundary between human and machine intelligence should be carefully defined to preserve human autonomy and decision-making while leveraging AI as a powerful tool for enhancing, rather than replacing, human capabilities.
Giannis Kalatzis
Postgraduate student - Physics
Member of the Greek STEM-net Horizons Chapter
Date: October 10, 2025
Artificial intelligence is undoubtedly the breakthrough of this decade. Owing to its widespread use, there are many fears about the future risks of AI. For me, the biggest fear is the following: "Will AI substitute for human workers?" The 2024 Nobel Prizes in Physics and Chemistry show that our conception of “physics” is evolving: physical principles are not just about atoms or fields, but also about information, computation, and complexity, partly thanks to the development of AI. So, I would like to congratulate the members of the Greek STEM-net Horizons Chapter on their work.
Georgios Tsakmakis
Undergraduate student in Medicine, AUTh