Literature Database Entry

sahin2022scheduling


Taylan Şahin, Ramin Khalili, Mate Boban, and Adam Wolisz, "Scheduling Out-of-Coverage Vehicular Communications Using Reinforcement Learning," IEEE Transactions on Vehicular Technology, vol. 71, no. 10, pp. 11103–11119, October 2022.


Abstract

Performance of vehicle-to-vehicle (V2V) communications depends heavily on the employed scheduling approach. While centralized network schedulers offer high V2V communication reliability, their operation is conventionally restricted to areas with full cellular network coverage. In contrast, in out-of-cellular-coverage areas, comparatively inefficient distributed radio resource management is used. To exploit the benefits of the centralized approach for enhancing the reliability of V2V communications on roads lacking cellular coverage, we propose VRLS (Vehicular Reinforcement Learning Scheduler), a centralized scheduler that proactively assigns resources for out-of-coverage V2V communications before vehicles leave the cellular network coverage. By training in simulated vehicular environments, VRLS can learn a scheduling policy that is robust and adaptable to environmental changes, thus eliminating the need for targeted (re-)training in complex real-life environments. We evaluate the performance of VRLS under varying mobility, network load, wireless channel, and resource configurations. VRLS outperforms the state-of-the-art distributed scheduling algorithm in zones without cellular network coverage by reducing the packet error rate by half in highly loaded conditions and achieving near-maximum reliability in low-load scenarios.
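A rough sketch of the underlying idea (not the authors' VRLS implementation; the toy single-round environment, the state encoding, and all names and parameters below are assumptions): a centralized scheduler can be trained entirely in simulation, here with tabular Q-learning, to hand out pool resources to departing vehicles without collisions.

    import random
    from collections import defaultdict

    N_RESOURCES = 6    # size of the out-of-coverage resource pool (assumed)
    N_VEHICLES = 4     # vehicles leaving coverage per scheduling round (assumed)
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

    Q = defaultdict(float)              # Q[(state, action)] -> estimated value

    def run_round(learn=True):
        """Assign one resource per departing vehicle; reward collision-free picks."""
        assigned, total = [], 0.0
        for _ in range(N_VEHICLES):
            state = tuple(sorted(assigned))                   # resources granted so far
            if learn and random.random() < EPS:
                action = random.randrange(N_RESOURCES)        # explore
            else:
                action = max(range(N_RESOURCES), key=lambda a: Q[(state, a)])
            reward = 1.0 if action not in assigned else -1.0  # penalize collisions
            assigned.append(action)
            nxt = tuple(sorted(assigned))
            if learn:
                best_next = max(Q[(nxt, a)] for a in range(N_RESOURCES))
                Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            total += reward
        return total

    for _ in range(5000):   # train entirely in the simulated environment
        run_round()
    print("greedy-policy reward:", run_round(learn=False))  # approaches N_VEHICLES

The paper itself trains a far richer policy over realistic vehicular simulations; this sketch only illustrates the train-in-simulation, assign-before-coverage-loss principle.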

Quick access

Original Version DOI (at publisher's web site)
Authors' Version PDF (on this web site)
BibTeX

Contact

Taylan Şahin
Ramin Khalili
Mate Boban
Adam Wolisz

BibTeX reference

@article{sahin2022scheduling,
    author = {{\c{S}}ahin, Taylan and Khalili, Ramin and Boban, Mate and Wolisz, Adam},
    doi = {10.1109/tvt.2022.3186910},
    title = {{Scheduling Out-of-Coverage Vehicular Communications Using Reinforcement Learning}},
    pages = {11103--11119},
    journal = {IEEE Transactions on Vehicular Technology},
    issn = {1939-9359},
    publisher = {IEEE},
    month = {10},
    number = {10},
    volume = {71},
    year = {2022},
}
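To cite this work from LaTeX, copy the entry above into your bibliography (.bib) file and reference its key, e.g. \cite{sahin2022scheduling}.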

Copyright notice

Links to final or draft versions of papers are presented here to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted or distributed for commercial purposes without the explicit permission of the copyright holder.

The following applies to all papers listed above that have IEEE copyrights: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

The following applies to all papers listed above that are in submission to IEEE conference/workshop proceedings or journals: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

The following applies to all papers listed above that have ACM copyrights: ACM COPYRIGHT NOTICE. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM, Inc., fax +1 (212) 869-0481, or permissions@acm.org.

The following applies to all SpringerLink papers listed above that have Springer Science+Business Media copyrights: The original publication is available at www.springerlink.com.

This page was automatically generated using BibDB and bib2web.