A Deep Reinforcement Learning-Based Caching Strategy for Internet of Things Networks with Transient Data

dc.contributor.advisor: Wang, Ping
dc.contributor.author: Nasehzadeh, Ali
dc.date.accessioned: 2021-11-15T15:16:51Z
dc.date.available: 2021-11-15T15:16:51Z
dc.date.copyright: 2021-05
dc.date.issued: 2021-11-15
dc.date.updated: 2021-11-15T15:16:51Z
dc.degree.discipline: Electrical and Computer Engineering
dc.degree.level: Master's
dc.degree.name: MASc - Master of Applied Science
dc.description.abstract: The Internet of Things (IoT) has been on a continuous rise in the past few years, and its potential is now more apparent. Transient data generation and limited energy resources are the major bottlenecks of these networks. In addition, conventional quality-of-service requirements, such as throughput, jitter, error rate, and latency, must still be met. An efficient caching policy can help meet these standard quality-of-service requirements while working around the specific limitations of IoT networks. Adopting deep reinforcement learning (DRL) algorithms enables us to develop an effective caching scheme without any prior knowledge or contextual information, such as popularity distributions or file lifetimes. In this thesis, we propose DRL-based caching schemes that improve the cache hit rate and energy consumption of IoT networks compared with several well-known conventional caching schemes. We also propose a hierarchical caching architecture that enables parent nodes to receive requests from multiple edge nodes and make caching decisions independently. The results of comprehensive experiments show that our proposed method outperforms the well-known conventional caching policies and an existing DRL-based method in cache hit and energy consumption rates by considerable margins. In the fifth chapter, we seek machine learning-based caching solutions for cases in which the file popularity distribution is not static but changes over time. Considering the same system model, we propose a concept drift detection method based on clustering and cluster similarity measurements. A detected drift triggers a process, based on transfer learning techniques, that leads the DRL agent to adapt to the new popularity distribution. Transfer learning lets us leverage the knowledge already present in trained models and significantly speeds up the training phase of our machine learning models.
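The drift-detection idea described in the abstract can be illustrated with a minimal, hypothetical sketch. The thesis bases detection on clustering and cluster similarity measurements; as a simplified stand-in, the snippet below compares request-popularity histograms from two time windows using cosine similarity and flags a drift when the similarity drops below a threshold. All names here (`catalog`, `drift_detected`, the 0.8 threshold) are illustrative assumptions, not the thesis's actual implementation.

```python
# Simplified popularity-drift detection: compare request histograms
# from an old and a new observation window (hypothetical sketch).
from collections import Counter
from math import sqrt

def popularity_histogram(requests, catalog):
    """Normalized request frequency for each file in the catalog."""
    counts = Counter(requests)
    total = len(requests) or 1
    return [counts[f] / total for f in catalog]

def cosine_similarity(p, q):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def drift_detected(window_old, window_new, catalog, threshold=0.8):
    """Flag a popularity drift when the two windows' histograms diverge."""
    p = popularity_histogram(window_old, catalog)
    q = popularity_histogram(window_new, catalog)
    return cosine_similarity(p, q) < threshold

catalog = ["f1", "f2", "f3"]
old = ["f1"] * 8 + ["f2"] * 2   # f1 dominates the old window
new = ["f3"] * 8 + ["f2"] * 2   # popularity has shifted to f3
print(drift_detected(old, new, catalog))  # True: distributions diverged
```

In the thesis, such a detection event would trigger the transfer-learning-based adaptation of the DRL agent to the new popularity distribution.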
dc.identifier.uri: http://hdl.handle.net/10315/38647
dc.language: en
dc.rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject: Computer engineering
dc.subject.keywords: Deep Reinforcement Learning
dc.subject.keywords: Caching
dc.subject.keywords: Internet of Things
dc.subject.keywords: Computer networks
dc.subject.keywords: Transfer learning
dc.subject.keywords: Concept drift detection
dc.title: A Deep Reinforcement Learning-Based Caching Strategy for Internet of Things Networks with Transient Data
dc.type: Electronic Thesis or Dissertation

Files

Original bundle (1 file)
Name: Nasehzadeh_Ali_2021_Masters.pdf
Size: 1.29 MB
Format: Adobe Portable Document Format

License bundle (2 files)
Name: license.txt
Size: 1.87 KB
Format: Plain Text

Name: YorkU_ETDlicense.txt
Size: 3.39 KB
Format: Plain Text