Digital Finance Security
Explanations of artificial intelligence: Author proposes model that highlights evidence of fairness

by Madeline Haze
February 16, 2023
in Artificial Intelligence, Finance & Technology
Credit: Pixabay/CC0 Public Domain

Artificial intelligence (AI) is used in a variety of ways, such as building new kinds of credit scores that go beyond the traditional FICO score. However, while these tools can powerfully and accurately predict outcomes, their internal operations are often difficult to explain and interpret. As a result, there is a growing demand in ethics and regulation for what is called explainable AI (xAI), especially in high-stakes domains.

In a new article, a professor at Carnegie Mellon University (CMU) suggests that explanations of AI are valuable to those affected by a model’s decisions if they can provide evidence that a past adverse decision was unfair. The article is published in Frontiers in Psychology for a special issue on AI in Business.

“Recently, legislators in the United States and the European Union have tried to pass laws regulating automated systems, including explainability,” says Derek Leben, Associate Teaching Professor of Ethics at CMU’s Tepper School of Business, who authored the article. “There are several existing laws that impose legal requirements for explainability, especially with respect to credit and lending, but they are often difficult to interpret when it comes to AI.”

In response to demands for explainability, researchers have produced a large set of xAI methods in a short period of time. These methods differ in the types of explanations they can generate, so Leben says we must now ask: What types of explanations are important for an xAI method to produce?

In the article, Leben identifies three types of explanations. One type explains a decision by providing the relative importance of its causal features (for example, “Your income of $40K was the most significant factor in your rejection”). Another type explains a decision by offering a counterfactual change in past states that would have led to a better outcome (for example, “If your salary had been higher than $50K—all else being equal—you would have been approved”). The third type provides practical recommendations on what individuals can do to improve their future outcomes (for example, “The best way for you to improve your score is to increase your savings by $5K”).
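The three explanation types can be made concrete with a small sketch. The toy credit-approval rule, its weights, and the thresholds below are illustrative assumptions, not taken from Leben's article; they only show how each explanation type answers a different question about the same rejection.

```python
# Toy linear credit-approval model (hypothetical weights and threshold).
def approve(income_k, savings_k, late_payments):
    """Approve when the weighted score clears 100."""
    score = 2.0 * income_k + 1.0 * savings_k - 15.0 * late_payments
    return score >= 100.0

applicant = {"income_k": 40, "savings_k": 10, "late_payments": 1}

# 1. Feature importance: which causal feature contributed most to the score.
contributions = {
    "income_k": 2.0 * applicant["income_k"],
    "savings_k": 1.0 * applicant["savings_k"],
    "late_payments": -15.0 * applicant["late_payments"],
}
most_important = max(contributions, key=lambda k: abs(contributions[k]))

# 2. Counterfactual: a change to a past state that flips the decision
#    ("if your income had been $55K, all else equal, you'd be approved").
counterfactual = dict(applicant, income_k=55)
counterfactual_holds = (not approve(**applicant)) and approve(**counterfactual)

# 3. Recourse recommendation: the smallest savings increase that would
#    flip the decision going forward.
needed_savings = applicant["savings_k"]
while not approve(applicant["income_k"], needed_savings,
                  applicant["late_payments"]):
    needed_savings += 1
```

With these numbers the applicant is rejected (score 75), income is the dominant feature, the $55K income counterfactual flips the decision, and the recourse answer is to raise savings to $35K.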

While there has been much debate about which type of explanation is most important, Leben supports xAI methods that provide information about counterfactual changes to past states, based on what he calls the evidence of fairness view. In this view, individuals affected by a model’s decisions (model patients) can and should care about explainability as a means to an end: verifying that a past decision treated them fairly.

Counterfactual explanations can provide people with evidence that a past decision was fair in two ways. The first is to demonstrate that a model would have produced a beneficial decision under alternative conditions that are under the model patient’s control (which the author calls positive evidence of fairness). The second is to show that a model would not have produced a beneficial decision when irrelevant behavioral or group attributes are altered (which Leben terms negative evidence of fairness).

Put another way, Leben suggests that xAI methods should be capable of demonstrating that a decision was counterfactually dependent on features that were under the applicant’s control (e.g., late payments) and not counterfactually dependent on features that are discriminatory (e.g., race and gender).
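These two checks can be sketched as counterfactual probes against a model. The decision rule and variable names below are hypothetical illustrations, not Leben's formalism: positive evidence asks whether flipping a feature under the applicant's control changes the outcome, and negative evidence asks whether flipping a protected attribute does not.

```python
# Toy decision rule that, by construction, ignores the protected
# attribute `group` (hypothetical example, not from the article).
def decide(late_payments, income_k, group):
    return income_k >= 50 and late_payments == 0

applicant = {"late_payments": 2, "income_k": 60, "group": "A"}
denied = not decide(**applicant)

# Positive evidence of fairness: a feature under the applicant's control
# (late_payments) is what the denial counterfactually depended on.
controlled_flip = dict(applicant, late_payments=0)
positive_evidence = denied and decide(**controlled_flip)

# Negative evidence of fairness: altering the group attribute leaves the
# decision unchanged, so the outcome did not depend on group membership.
group_flip = dict(applicant, group="B")
negative_evidence = decide(**group_flip) == decide(**applicant)
```

Here both checks pass: clearing the late payments would have produced approval, while changing the group attribute changes nothing, which together support the claim that the denial was fair in Leben's sense.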

Leben says his work has practical implications. Not only can these ideas inform legislative efforts and industry norms around explainability, but they can also be used in other domains. For example, engineers designing AI models and their associated xAI methods can use the evidence of fairness view to help evaluate them.

More information: Derek Leben, Explainable AI as evidence of fair decisions, Frontiers in Psychology (2023). DOI: 10.3389/fpsyg.2023.1069426

Research provided by Tepper School of Business, Carnegie Mellon University