Algorithmic bias occurs when an artificial intelligence system or algorithm produces results that are systematically unfair to particular groups. It often arises from flawed, incomplete, or unrepresentative training data, or from biased assumptions built into the system's design. As a result, certain groups may be favored or disadvantaged in automated decisions such as hiring, lending, or law enforcement. Recognizing and addressing algorithmic bias is therefore crucial to ensuring fairness, transparency, and ethical outcomes in technology-driven processes.
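One way the kind of group-level disparity described above can be made concrete is with a simple fairness metric. The sketch below, using entirely synthetic hiring outcomes (the data and function names are illustrative, not from any real system), computes the demographic parity gap: the difference in the rate of positive decisions between two groups.

```python
# Illustrative sketch: quantifying one simple notion of algorithmic bias
# (the demographic parity gap) on synthetic hiring decisions.
# All data below are hypothetical, invented for demonstration.

def selection_rate(decisions):
    """Fraction of positive outcomes (e.g., 1 = hired, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups.

    A gap near 0 means both groups receive positive decisions at
    similar rates; a large gap signals a potential disparity worth
    investigating.
    """
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Synthetic outcomes for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate: 6/8 = 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate: 2/8 = 0.25

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50
```

Demographic parity is only one of several competing fairness definitions (others include equalized odds and calibration), and which one is appropriate depends on the decision context; a metric like this flags a disparity but does not by itself explain or fix its cause.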