dc.contributor.author: Perfilieva, Irina
dc.contributor.author: Madrid Labrador, Nicolás Miguel
dc.contributor.author: Ojeda Aciego, Manuel
dc.contributor.author: Artiemjew, Piotr
dc.contributor.author: Niemczynowicz, Agnieszka
dc.contributor.other: Matemáticas
dc.date.accessioned: 2026-01-23T08:12:04Z
dc.date.available: 2026-01-23T08:12:04Z
dc.date.issued: 2025-03-25
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/10498/38430
dc.description.abstract: Despite several successful applications of the Extreme Learning Machine (ELM) as a new neural network training method that combines random selection with deterministic computation, we show that some fundamental principles of ELM lack a rigorous mathematical basis. In particular, we refute the proofs of two fundamental claims and construct datasets that serve as counterexamples to the ELM algorithm. Finally, we provide alternative claims to the basic principles that justify the effectiveness of ELM in some theoretical cases.
dc.format: application/pdf
dc.language.iso: eng
dc.publisher: Elsevier
dc.source: Neurocomputing 621: 129298 (2025)
dc.subject: Extreme Learning Machine
dc.subject: Feed-forward neural network
dc.subject: Moore-Penrose generalized inverse matrix
dc.subject: Pseudo-inverse matrix
dc.title: A critical analysis of the theoretical framework of the Extreme Learning Machine
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1016/J.NEUCOM.2024.129298
dc.relation.projectID: PID2022-140630NB-I00
dc.type.hasVersion: SMUR
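The abstract critiques the standard ELM training scheme: hidden-layer weights are chosen at random and left fixed, and only the output weights are computed deterministically, via the Moore-Penrose pseudo-inverse of the hidden-layer output matrix. As a minimal sketch of that conventional scheme (not of the paper's counterexamples; function names, the tanh activation, and the toy data are illustrative assumptions, not taken from the article):

```python
import numpy as np

def elm_fit(X, T, n_hidden, seed=0):
    """Sketch of standard single-hidden-layer ELM training (assumed form):
    random fixed input weights, output weights solved by least squares
    using the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # deterministic step: pseudo-inverse solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress y = x1 + x2 + x3 on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
T = X.sum(axis=1, keepdims=True)
W, b, beta = elm_fit(X, T, n_hidden=50)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

The pseudo-inverse step makes `beta` the minimum-norm least-squares solution of `H @ beta = T`, which is exactly the "deterministic computation" half of the ELM recipe the article analyzes.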

