

How are Governments Using Artificial Intelligence? How are They Misusing AI?

There has been a lot of talk about artificial intelligence – who is using it, how it works, and what it will lead to. Rensselaer Polytechnic Institute professor James Hendler – who was recently named to the newly formed Association for Computing Machinery (ACM) Technology Policy Council – penned a piece for The Conversation outlining the danger AI could pose to American society if there is not enough oversight. Here are some excerpts:

“Artificial intelligence systems can – if properly used – help make government more effective and responsive, improving the lives of citizens. Improperly used, however, the dystopian visions of George Orwell’s '1984' become more realistic. On their own and urged by a new presidential executive order, governments across the U.S., including state and federal agencies, are exploring ways to use AI technologies. As an AI researcher for more than 40 years, who has been a consultant or participant in many government projects, I believe it’s worth noting that sometimes they’ve done it well – and other times not quite so well. The potential harms and benefits are significant...”

“...Other government uses of AI are being questioned, too – such as attempts at 'predictive policing,' setting bail amounts and criminal sentences and hiring government workers. All of these have been shown to be susceptible to technical issues and data limitations that can bias their decisions based on race, gender or cultural background. Other AI technologies such as facial recognition, automated surveillance and mass data collection are raising real concerns about security, privacy, fairness and accuracy in a democratic society...”

“...As the use of AI technologies grows, whether originally well-meant or deliberately authoritarian, the potential for abuse increases as well. With no currently existing government-wide oversight in place in the U.S., the best way to avoid these abuses is teaching the public about the appropriate uses of AI by way of conversation between scientists, concerned citizens and public administrators to help determine when and where it is inappropriate to deploy these powerful new tools.”

Are you a reporter covering AI? Then let us help with your stories and ongoing coverage. Professor James Hendler is the Director of the Institute for Data Exploration and Applications at Rensselaer Polytechnic Institute. He is available to speak with media – simply click on his icon to arrange an interview.


AI and Online Extremism

Responding to increased pressure, Facebook has "doubled down" on identifying and removing posts by online extremists and the groups they use to share content. As noted in a recent NBC News article, these increased efforts include using artificial intelligence and machine learning to proactively identify and remove posts and groups that break the rules, and promoting tools that would hold group administrators more accountable for posted content.

Megan Squire is a professor of computer science who has conducted extensive research tracking online hate groups and how they leverage social media platforms to recruit new members and spread propaganda. Squire told NBC News that she's skeptical that even these more high-tech efforts will meaningfully curtail the activity of online extremists. “Same stuff, different day,” she said. “Just in the past month, I reported groups calling for a global purge of Islam, extermination of people based on religion, and calling for violence through a race war, and Facebook’s response was that none of these groups was a violation of community standards.”

If Dr. Squire can assist with your reporting about social media and online extremism, please reach out to Owen Covington, director of the Elon University News Bureau, at ocovington@elon.edu or (336) 278-7413. Dr. Squire is available for phone, email and broadcast interviews.


Meet Your Newest Job Recruiter, the Algorithm – let our experts explain

Equal employment opportunities may not be part of a computer’s calculations, but one engineer from Georgia Tech is trying to change that.

When you apply for a job, chances are your resume has been through numerous automated screening processes powered by hiring algorithms before it lands in a recruiter’s hands. These algorithms look at things like work history, job title progression and education to weed out resumes. There are pros and cons to this – employers are eager to harness the artificial intelligence (AI) and big data captured by the algorithms to speed up the hiring process. But depending on the data used, automated hiring decisions can be very biased.

“Algorithms learn based on data sets, but the data is generated by humans who often exhibit implicit bias,” explains Swati Gupta, an industrial engineering researcher at Georgia Tech whose work focuses on algorithmic fairness. “Our hope is that we can use machine learning with rigorous mathematical analysis to fix the bias in areas like hiring, lending and school admissions.”

But as algorithms deliver speed and efficiency – how can they be adjusted to account for race, gender and other human factors? It’s an area Dr. Gupta has been researching and refining.

If you are a reporter or journalist looking to cover this topic – that’s where our experts can help. Dr. Swati Gupta is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. Dr. Gupta is an expert in the areas of optimization, machine learning, and bias and fairness within the AI sphere. She is available to speak with media regarding this topic – simply click on her icon to arrange an interview.
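The dynamic Dr. Gupta describes can be seen in a minimal toy sketch (this is an illustration only, not her method, and every name and number below is fabricated): a screening rule "learned" from biased historical decisions penalizes a proxy feature, here career gaps, that happens to correlate with one group, so equally qualified applicants end up screened at different rates.

```python
# Toy illustration of learned hiring bias. All data is invented.
from collections import defaultdict

# Historical records: (career_gap_years, group, hired).
# Past screeners penalized career gaps, which in this fabricated
# sample occur more often in group B.
history = [
    (0, "A", 1), (0, "A", 1), (1, "A", 1), (0, "A", 1),
    (2, "B", 0), (3, "B", 0), (0, "B", 1), (2, "B", 0),
]

# "Learn" a rule: the majority historical decision per gap length.
votes = defaultdict(list)
for gap, _, hired in history:
    votes[gap].append(hired)
rule = {gap: round(sum(v) / len(v)) for gap, v in votes.items()}

# Apply the rule to new, equally qualified applicants whose gap
# lengths mirror the same real-world correlation with group.
applicants = [(0, "A"), (0, "A"), (1, "A"), (2, "B"), (0, "B"), (2, "B")]
decisions = defaultdict(list)
for gap, group in applicants:
    decisions[group].append(rule.get(gap, 0))

# Selection rate per group: B is screened out far more often,
# even though the rule never looks at group membership directly.
selection_rate = {g: sum(v) / len(v) for g, v in decisions.items()}
print(selection_rate)
```

The rule never sees the group label, yet it reproduces the disparity through the correlated proxy feature – which is why, as Gupta notes, fixing such bias requires rigorous analysis of the data and the model together, not just removing protected attributes.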


The Gun Control Debate is at a Stalemate. Can Smarter Weapons Help to Solve It?

The gun control debate is at a stalemate. America seems incapable of finding common ground on background checks, waiting periods, weapons registries and restrictions or bans on select weapons. Shooting after shooting has resulted in decades of debate but little substantive change. But Professor Selmer Bringsjord from Rensselaer Polytechnic Institute, who recently weighed in on the issue, presented a concept that could turn the entire topic on its head by using artificial intelligence. Bringsjord accepts that America won’t get rid of its guns – so why not just make our guns smarter? He argues that ethically AI-enabled weapons could put American politicians back to work by shifting the debate from which weapons we should ban to which targets we will accept. Do we allow guns to kill school children, shoppers, concert-goers? The technology of ethical AI changes the conversation.

His idea was recently published in the Times Union:

“Yet there is a solution, a technological alternative to the fruitless shouting match between politicians: namely, AI — of an ethical sort. Guns that are at once intelligent and ethically correct can put an end to the mass-shooting carnage. Consider the rifle apparently used by the human killer in the El Paso Walmart shooting. But now suppose that time is turned back to before his shots were fired on Aug. 3, and that his rifle, radically unlike the stupid one that killed, is both intelligent and ethical. This alternate-future rifle would know that it's approaching the Walmart by car and would accordingly know that it has no business being used anytime soon. Move forward in time a bit; the rifle is now in the hands of the aspiring, ear-muffed killer outside his car; but his weapon has fully disengaged itself and is locked into a mode of utter uselessness with the finality of a sealed bank vault. On the other hand, the guns in the hands of law enforcement officers who have dashed on scene know in whose hands they rest, and accordingly know that if they are trained on the would-be killer, they have every right to work well, if this criminal reveals some new threat. Notice: If people who don't actually pose a threat sufficient to warrant being shot by police can't be shot by smart, ethical guns, a fact that could lead to the welcome evaporation of a different but also vitriolic political shouting match.” – Times Union, August 16, 2019

Could AI be the answer to America’s gun problem? It’s truly a new perspective on an old issue. If you are a reporter covering this topic, let our experts help with your story. Dr. Selmer Bringsjord is the Chair of the Department of Cognitive Science at Rensselaer Polytechnic Institute and an expert in logic and philosophy, specializing in AI and reasoning. Dr. Bringsjord regularly speaks with media about AI and is available to speak about the concept of intelligent, ethical guns. Simply click on his icon to arrange an interview.


Let our experts explain the value of AI and Process Automation. Join us at Directions 2019 on May 2 to find out!

Just how big of a deal is AI? At this year’s Directions 2019, IDC Canada experts will speak on a variety of topics that are reshaping the digital visions and tactics modern companies are using to compete. Explore how AI encompasses a huge spectrum of technologies for the enterprise and how, at the center of it all, is data.

On May 2, join Warren Shiau, Research Vice-President with IDC Canada, as he presents a highly anticipated talk, AI: Process Automation, at 11:20 AM. Warren will look at what’s being adopted by Canadian enterprises under the banner of AI, and why AI can generate significant business value even in the absence of large data science teams and enterprise-wide high-quality data. Deep learning may rule the future, but “small AI” targeting things like process automation rules the day.

Organizations are rethinking digital transformation – join us May 2 to learn more.

Location: St. James Cathedral Centre: Snell Hall, 65 Church Street, Toronto
Date: May 2, 2019
Time: 8:00 AM - 8:30 AM – Registration & Networking Breakfast | 8:30 AM - 3:30 PM – Conference Program

Register today before it's too late! If you're a member of the media and would like to attend this event, please contact Cristina Santander at csantander@idc.com.


What's in Store for the ICT Industry in 2019?

Get out in front of the 10 key technology predictions we expect to see in 2019 and beyond! At this year's IDC Canada Predictions 2019 webcast, IDC Canada's Lars Goransson and Tony Olvet discussed what's in store for the Canadian ICT industry, including the massive jump in technology trends and the pace of innovation expected for 2019 and beyond. Watch the replay today!

The forces of multiplied innovation are powering up, with an explosion of digital innovation platforms and ecosystems, enabled by a new wave of application deployment, AI, trust and ambient interface technologies, all built on a new generation of the cloud. How will enterprises race to reinvent their IT organizations, and what IT skills are needed to grow and compete in the years ahead?

Hear more from Tony Olvet, Lars Goransson and a selected group of IDC Canada analysts at this year's Predictions 2019 webcast. Don't miss out! Watch the replay and access the presentation deck here: https://bit.ly/2yACbqw
