
Generalized Proportional Lumpability

Piazza C.
2026-01-01

Abstract

In this paper, we address the challenge of state space explosion in the analysis of large stochastic models by advancing the lumpability approach, a state aggregation technique that exploits structural regularities in Markov chains to efficiently compute stationary performance indices. We generalize the concept of proportional lumpability, which extends the well-known notion of lumpability and allows for the exact computation of stationary performance indices, in contrast to quasi-lumpability, which only provides bounds. Proportional lumpability is achieved through a perturbation of the original Markov chain’s transition rates, guided by a proportionality function. We further explore the idea of perturbing Markov chains through left and right multiplications by square matrices, introducing the concepts of left and right-perturbed Markov chains, which preserve the original model’s topology. For left-perturbed Markov chains, the steady-state distribution of the original chain can be derived by multiplying the probability vector of the perturbed chain by the square matrix used to define the perturbation. In contrast, for right-perturbed Markov chains, the steady-state probability distribution remains unchanged.
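The abstract's two claims about perturbed chains can be checked numerically on a toy continuous-time Markov chain. The sketch below is illustrative only: it assumes a left perturbation of the form Q' = SQ for an invertible diagonal matrix S (playing the role of the proportionality function) and a right perturbation Q' = QM with M = I + εQ, a simple choice that keeps zero row sums; these are assumed forms for demonstration, not the paper's exact constructions.

```python
import numpy as np

# Toy CTMC generator Q (off-diagonals >= 0, rows sum to zero), states 0..2.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -4.0,  3.0],
    [ 2.0,  2.0, -4.0],
])

def stationary(G):
    """Solve pi @ G = 0 with pi summing to 1, via a least-squares system."""
    n = G.shape[0]
    A = np.vstack([G.T, np.ones(n)])      # transpose: we solve G.T @ pi = 0
    b = np.zeros(n + 1)
    b[-1] = 1.0                           # normalization row: sum(pi) = 1
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary(Q)

# Left perturbation: Q_left = S @ Q, with S a diagonal matrix of positive
# scaling factors (an arbitrary "proportionality" vector chosen here).
kappa = np.array([1.0, 2.0, 0.5])
S = np.diag(1.0 / kappa)
pi_left = stationary(S @ Q)
# Recover the original steady state: multiply by S and renormalize,
# since (pi @ inv(S)) @ (S @ Q) = pi @ Q = 0.
recovered = pi_left @ S
recovered /= recovered.sum()

# Right perturbation: Q_right = Q @ M with M = I + eps * Q. Row sums stay
# zero (Q @ 1 = 0 implies Q @ M @ 1 = 0) and, for small eps, off-diagonal
# entries remain nonnegative, so Q_right is still a generator.
eps = 0.05
M = np.eye(3) + eps * Q
pi_right = stationary(Q @ M)

print(np.allclose(recovered, pi))   # → True: original pi recovered
print(np.allclose(pi_right, pi))    # → True: steady state unchanged
```

The left case works because pi @ inv(S) is (up to normalization) stationary for S @ Q, so multiplying back by S undoes the perturbation; the right case works because pi @ Q = 0 immediately gives pi @ (Q @ M) = 0, so the stationary distribution is untouched.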
ISBN: 9783032068170; 9783032068187
Files associated with this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1324686
Citations
  • Scopus 0