Crop disease diagnosis is crucial for effective treatment and a pressing concern in agriculture. Identifying precise disease grades is vital, as treatments vary. We have developed an image processing and deep learning system, MDFC–ResNet, which detects and diagnoses crop diseases accurately at the species, coarse-grained, and fine-grained levels. MDFC–ResNet achieves better recognition than other popular deep learning models and is more instructive in actual agricultural production activities.
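The multi-level idea above can be sketched as a simple late fusion of per-level scores. This is an illustrative toy, not the actual MDFC–ResNet architecture: the fusion weights, the logits, and the shared three-way label space are all assumptions made for the example.

```python
import numpy as np

# Toy sketch of multi-level prediction fusion (assumed mechanism, not the
# exact MDFC-ResNet design): scores from species, coarse-grained, and
# fine-grained heads are combined so coarser levels can compensate an
# ambiguous fine-grained prediction.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical logits from three classifier heads over the same toy label set
species = softmax(np.array([2.0, 0.1, 0.1]))   # confident in class 0
coarse  = softmax(np.array([1.5, 1.4, 0.1]))   # leans toward class 0
fine    = softmax(np.array([0.9, 1.0, 0.2]))   # ambiguous on its own

# Weighted fusion; the coarser levels tip the decision toward class 0
fused = 0.2 * species + 0.3 * coarse + 0.5 * fine
print(fused.argmax())
```

The weights here are arbitrary; in practice they would be learned or tuned on validation data.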
Cloud services, vital in private, public, and commercial sectors, demand unwavering security and resilience. This paper introduces online cloud anomaly detection using one-class SVMs at the hypervisor level, demonstrating detection accuracy exceeding 90% against malware and DoS attacks and highlighting the value of combining system and network data for versatile detection. The approach, which assigns a dedicated monitoring component to each VM, adapts well to cloud scenarios, even against previously unknown malware strains.
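A minimal sketch of the one-class SVM idea, using scikit-learn on synthetic per-VM monitoring features. The feature set, the synthetic values, and the `nu` setting are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical per-VM monitoring features: [cpu %, mem %, net packets/s, syscalls/s]
normal = rng.normal(loc=[30, 40, 200, 500], scale=[5, 5, 20, 50], size=(200, 4))

# Train only on benign behaviour; nu bounds the fraction of training
# outliers tolerated, so no attack samples are needed at training time
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)

# A DoS-like burst: rates far outside the trained envelope
anomaly = np.array([[95.0, 90.0, 5000.0, 9000.0]])
print(clf.predict(anomaly))   # -1 marks the sample as anomalous
```

Training on benign data only is what lets the detector flag unknown malware strains: anything outside the learned envelope is anomalous, whether or not it was seen before.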
In the era of social networking and real-time communication, the vast amount of textual data generated on comment services presents a unique challenge and opportunity for information retrieval and consumption. “IncreSTS Text Summarization” represents an innovative approach to real-time text summarization in the context of social networks and comment services. This research explores methods and techniques to extract key information, trends, and sentiments from the dynamic and rapidly evolving landscape of comments and discussions on social media platforms. By harnessing the power of text summarization, this work aims to enable users to efficiently digest and engage with the wealth of user-generated content on these platforms. The “IncreSTS Text Summarization” project offers a promising avenue for enhancing the accessibility and utility of real-time commentary in the digital age.
Computers are frequently tools for various crimes, including hacking, drug trafficking, and child pornography. Over 75% of criminals store their plans on their PCs, so when they are caught, investigators seek evidence on these machines. The rise in computer-related crimes has led to a demand for specialized forensic tools that streamline the evidence search and make the process more efficient than manual inspection. Inspired by the SQ's forensic process, we propose a novel subject-based semantic document clustering model that lets investigators group the documents on a suspect's computer into overlapping clusters, where each cluster corresponds to a subject defined by the investigator. While numerous forensic tools exist, this system distinguishes itself through its subject-driven, overlapping clustering approach.
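The overlapping, subject-based grouping can be sketched as below. The subjects, keyword lists, documents, and matching threshold are all hypothetical; a real system would use semantic similarity rather than raw keyword overlap:

```python
# Investigator-defined subjects, each described by a keyword set (hypothetical)
subjects = {
    "finance":  {"transfer", "account", "offshore", "wire"},
    "contacts": {"meeting", "phone", "address", "contact"},
}

documents = {
    "doc1.txt": "wire the transfer to the offshore account before the meeting",
    "doc2.txt": "phone the contact and confirm the address",
}

def cluster(documents, subjects, min_hits=1):
    # Overlapping clustering: a document joins EVERY subject cluster whose
    # keywords it matches at least min_hits times, so it may land in several
    clusters = {name: [] for name in subjects}
    for doc_id, text in documents.items():
        words = set(text.lower().split())
        for name, keywords in subjects.items():
            if len(words & keywords) >= min_hits:
                clusters[name].append(doc_id)
    return clusters

print(cluster(documents, subjects))
# doc1.txt mentions "meeting", so it appears in both clusters
```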
Enhancing data security in outsourced, distributed, and utility-based environments is crucial for building trust. Data confidentiality must be preserved so that users' trust in the system remains intact and availability is uninterrupted. Because data is accessed frequently, user verification is required to prevent security breaches. As an extra layer of security, users supply a question-and-answer token that is verified during file access without disrupting their activities. File access patterns are also tracked, and any deviation in access timing is treated as an access failure. Unauthorized access is further deterred by serving encrypted false data instead of the genuine content.
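The question-and-answer token check with decoy data might look like the sketch below. The salted-hash token, the function names, and the decoy mechanism are assumptions made for illustration, not the system's specified design:

```python
import hashlib
import secrets

# Hypothetical sketch: the stored token is a salted hash of the user's
# answer; a wrong answer silently yields decoy (false) data rather than
# an outright denial, so attackers learn nothing from the response.
def make_token(answer, salt=None):
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.sha256(salt + answer.lower().encode()).hexdigest()
    return salt, digest

def access_file(answer, salt, token, real_data, decoy_data):
    _, attempt = make_token(answer, salt)
    return real_data if attempt == token else decoy_data

salt, token = make_token("blue heron")
print(access_file("blue heron", salt, token, "secret report", "decoy"))  # genuine data
print(access_file("wrong answer", salt, token, "secret report", "decoy"))  # decoy served
```

In the described system the decoy would itself be encrypted false data; a plain string stands in for it here.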
A new context-based model (CoBAn) for accidental and intentional data leakage prevention (DLP) is proposed. Existing methods attempt to prevent data leakage by either looking for specific keywords and phrases or by using various statistical methods. Keyword-based methods are not sufficiently accurate since they ignore the context of the keyword, while statistical methods ignore the content of the analyzed text. During the training phase, clusters of documents are generated and a graph representation of the confidential content of each cluster is created. During the detection phase, each tested document is assigned to several clusters and its contents are then matched to each cluster’s respective graph in an attempt to determine the confidentiality of the document. Extensive experiments have shown that the model is superior to other methods in detecting leakage attempts, where the confidential information is rephrased or is different from the original examples provided in the learning set.
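The detection-phase idea of matching content against a cluster's confidential context can be sketched very roughly as below. This is not CoBAn's actual graph-matching algorithm; the flat key-term/context-term table, the terms, and the threshold are all assumptions for illustration:

```python
# Each cluster's "graph" is reduced here to key terms plus context terms
# (hypothetical): a key term only counts as a leak when enough of its
# context appears in the same document, so keyword hits outside their
# confidential context are ignored -- the gap pure keyword methods miss.
cluster_graph = {
    "quarterly revenue": {"unreleased", "forecast", "internal"},
    "prototype":         {"codename", "specification", "restricted"},
}

def is_confidential(text, graph, min_context=1):
    words = set(text.lower().split())
    for key_term, context in graph.items():
        key_present = all(w in words for w in key_term.split())
        if key_present and len(words & context) >= min_context:
            return True
    return False

leak   = "internal forecast of quarterly revenue before the announcement"
benign = "the museum posted its quarterly revenue publicly"
print(is_confidential(leak, cluster_graph))    # key term in confidential context
print(is_confidential(benign, cluster_graph))  # same keyword, harmless context
```

Rephrased leaks are caught in the full model because matching happens against cluster-level representations rather than the literal training examples.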
This project addresses the growing reliance on websites for various real-life applications and the inherent security vulnerabilities present in dynamic web applications developed using the ASP.NET framework. With the widespread use of ASP.NET for its language support and powerful features, such as event-driven programming and rich server controls, the need for secure development practices becomes paramount. The paper introduces an algorithm aimed at enhancing website security by detecting vulnerabilities. The algorithm, comprising multiple steps, has already seen partial implementation, with ongoing work to complete the remaining stages. To mitigate security risks, the project emphasizes the importance of vigilant developers and website owners, advocating for security integration from the outset. The project also describes a development tool designed to uncover vulnerabilities in website source code, focusing on the three-tiered web application model: presentation, application (ASP.NET), and storage. The tool scans for potential vulnerabilities, generates a comprehensive report listing identified issues, and highlights their locations within the code.
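The scan-and-report step can be sketched as a rule-driven source scanner. The rule set, patterns, and sample code are hypothetical; a production tool for ASP.NET would parse the code rather than pattern-match lines:

```python
import re

# Hypothetical rule set: regex patterns for two common dynamic-web pitfalls,
# string-concatenated SQL and raw request data written straight to the response
RULES = {
    "possible SQL injection (string-built query)": re.compile(r'"SELECT .*"\s*\+', re.I),
    "unencoded output (raw request data written to response)": re.compile(r'Response\.Write\(\s*Request', re.I),
}

def scan(source):
    # Return (line number, issue) pairs, mirroring the report the tool
    # generates: each finding lists the issue and its location in the code
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

sample = '''cmd.CommandText = "SELECT * FROM Users WHERE name='" + name;
Response.Write(Request.QueryString["q"]);'''

for lineno, issue in scan(sample):
    print(f"line {lineno}: {issue}")
```

Reporting line locations alongside each issue matches the tool's goal of highlighting where in the source a vulnerability sits, not just that one exists.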
AI for Combating COVID-19: This project explores the application of Artificial Intelligence (AI) and Deep Learning methods, including Generative Adversarial Networks (GANs), Extreme Learning Machine (ELM), and Long Short-Term Memory (LSTM), to address the global COVID-19 crisis. It proposes an integrated bioinformatics approach that harnesses structured and unstructured data to create user-friendly platforms for medical professionals and researchers, with a primary focus on expediting diagnosis and treatment processes. The research leverages the latest medical publications and reports to optimize the Artificial Neural Network-based tools, employing a variety of data inputs, including clinical information and medical imaging, to enhance practical outcomes.