LD4017 Database Design and Implementation Assignment Sample

Ace Your LD4017 Database Design and Implementation Assignment with Expert Guidance from Rapid Assignment Help – Covering DBMS Fundamentals, ERD Creation, Normalisation, Schema Design, SQL Stored Procedures, and Ethical Considerations.

  • Type: Assignment
  • Downloads: 606
  • Pages: 13
  • Words: 3165

Section 1: Database Fundamentals

Section 1 of the LD4017 Database Design and Implementation Assignment focuses on the fundamentals of databases, highlighting the importance of Database Management Systems (DBMS) in modern businesses. It explains how DBMS ensures data consistency, security, and efficient querying for decision-making. The section also explores the advantages of relational databases, data modelling, schema design, and the role of normalisation up to advanced normal forms. Additionally, it discusses the key components of database system architecture, including storage, query processing, and transaction management. Students seeking Online Assignment Help can gain valuable insights into how these concepts form the backbone of robust database solutions.

Importance of Database Management Systems in Modern Businesses

Database Management Systems (DBMS) are no longer optional but necessary for today's organisations, because they form the underlying backbone for data processing and analysis. They provide centralised storage of data and information assets so that standardisation can be achieved and data need not be repeated (Taipalus, 2020). Centralisation is needed to ensure data consistency and to get rid of the problem of different versions of the same information being held in separate, largely independent sources. Moreover, DBMS offer security features such as user authentication, authorisation and privileges, as well as encryption, which have become essential for securing personal data and meeting the standards and requirements of data protection legislation. Integrity rules enforced at the database level determine whether a record is valid before it is stored, giving assurance that the information held is correct.


Moreover, DBMS provide powerful querying capabilities so that large amounts of data can be retrieved and analysed in a minimal amount of time. Such high-speed analysis of large data volumes is necessary to maintain competitive advantage in fast-moving industries and other rapidly evolving markets (Melina et al., 2020). In this way, DBMS enable businesses to control and process their data appropriately, make the required decisions and respond to unanticipated changes in market conditions.

Advantages of Relational Databases

Relational databases are now the most widely used technology for the storage of structured information. Since relational databases are table-based, they offer a great deal of flexibility in the structure of the database as the business evolves (Karwin, 2022). It can be argued that this kind of flexibility is particularly useful in the present unpredictable period for businesses. Relational systems such as MySQL are well suited to structured data because normalisation organises data into tables and does not allow duplication. This helps guard against errors introduced during data entry and ensures that the data are standardised across the system.

Relational databases also provide a favourable environment for constructing and supporting elaborate queries through Structured Query Language (SQL), the standard tool used to retrieve, modify and analyse stored information. SQL also makes interoperability with other software programs and reporting tools easier. Relational database systems place great emphasis on data integrity through primary keys, foreign keys and constraints, which are central to maintaining the integrity of the data as it is altered (Zhang and Pan, 2022). Besides this, they have a solid foundation in concurrency control and transaction processing, which is essential in multi-user systems for ensuring that data remains accurate even while other users are concurrently accessing and manipulating it.
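To make the point about declarative integrity concrete, the following is a minimal MySQL sketch. It uses simplified, invented stand-ins for the Customers and Orders tables designed later in this assignment; the foreign key causes the invalid insert to be rejected.

-- Simplified stand-in tables, for illustration only.
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL,
    CONSTRAINT fk_orders_customer
        FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)
);

INSERT INTO Customers VALUES (1, 'Alice');
INSERT INTO Orders VALUES (10, 1);   -- accepted: customer 1 exists
INSERT INTO Orders VALUES (11, 99);  -- rejected: violates fk_orders_customer

Because the rule lives in the schema rather than in application code, every program that touches the database is subject to the same check.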

Data Modelling, Schema, and Normalisation

The entity relationship model is one of the foundations of developing the structure of a database, and the method focuses on constructing a conceptual model of the data. Data modelling can be described as a map of how data should be arranged so that it mirrors the real-life entities and processes of the organisation. When used in database design, this model provides a rigid and unambiguous definition of the database, and in particular of its tables, fields, relationships and constraints. It works as a reference model that informs how data is set up for use, governed for acceptable quality and made accessible for easy retrieval. Normalisation in relational databases, guided by functional dependencies, ensures that the data is well arranged and that no redundant data is present in the system. The process subdivides a large table into several smaller ones and specifies the connections between them.

Normalisation pursues goals such as decreasing data redundancy, reducing the possibility of data anomalies and increasing data consistency. Normalisation techniques are used to minimise and avoid the repetition of elements within the constructed databases and so make them more efficient (Al-Aqbi et al., 2021). However, as has been pointed out earlier, normalisation should be balanced against performance requirements because, as a rule, the higher the level of normalisation, the more complex the queries become and, as a result, the worse the system may perform.


Normal Forms

Normal forms are progressive degrees of database normalisation, each of which adds further restrictions to the database structure. First Normal Form (1NF) requires that every column of a relation hold only atomic values and removes all repeating groups of columns. Second Normal Form (2NF) builds on 1NF and removes partial dependencies, so that every non-key attribute in a table must be functionally dependent on the whole primary key. Third Normal Form (3NF) goes one step further than 2NF by eliminating transitive dependencies, so that non-key attributes do not depend on other non-key attributes.

Boyce-Codd Normal Form (BCNF) is a stronger version of 3NF that deals with certain specific types of anomalies not covered by 3NF. Higher normal forms such as 4NF and 5NF address multi-valued dependencies and join dependencies. Although these forms can handle very large and complex data sets, they introduce several additional conditions; applying them may complicate query structures and affect application performance (Jaleel and Abbas, 2020). Thus, when choosing the actual degree of normalisation, database designers weigh the merits of normalisation against the performance needs of the database.
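As a brief illustration (the tables here are hypothetical, not part of the TechGoods design), the transitive dependency CustomerID -> City -> Region violates 3NF and is removed by splitting the relation:

-- Before (violates 3NF): CustomerCity(CustomerID, City, Region),
-- where Region depends on City rather than directly on the key CustomerID.

-- After decomposition into 3NF:
CREATE TABLE CustomerCity (
    CustomerID INT PRIMARY KEY,
    City       VARCHAR(60) NOT NULL
);

CREATE TABLE CityRegion (
    City   VARCHAR(60) PRIMARY KEY,
    Region VARCHAR(60) NOT NULL
);

Each fact (which city a customer lives in, which region a city belongs to) is now stored exactly once, at the cost of a join when both are needed together.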

Key Components of a Database System Architecture

Storage: Storage is one of the primary functions of a database system, as this is where data is placed on disc or other storage media. The storage component organises data in a way that enables efficient, easy access and manipulation. Database files are the basic structures of the database that hold tables, indexes and other objects. These files are stored on disc and maintained by the DBMS in a way that attempts to optimise their storage and retrieval (Srinivasan et al., 2023). Indexes are data structures that speed up data retrieval operations. With indexes on certain fields, the DBMS can quickly find which records correspond to a given set of keys, eliminating the need to scan the entire table. Within a database, information is most commonly stored in pages or blocks, where a block or page is the smallest unit of data that the DBMS writes to or reads from disc. Organising data into pages helps to improve disc I/O operations.
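As a sketch of how an index supports retrieval (assuming the Orders table and CustomerID column defined in Section 2), a secondary index lets MySQL locate the matching rows without reading every page of the table:

-- Secondary index on the column used for lookups.
CREATE INDEX idx_orders_customer ON Orders (CustomerID);

-- This query can now be answered through the index rather than a full table scan.
SELECT OrderID, OrderDate
FROM Orders
WHERE CustomerID = 1;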

Query Processing: Query processing is the part of the DBMS that executes an SQL query and returns the result in the form required. It has several subparts, such as a parser, an optimiser and an executor. The query parser performs the syntactical analysis of the SQL query entered by the user (Abbas and Ahmad, 2020). If the parser can parse the query, the system constructs a parse tree that describes the logical form of the query. The job of the query optimiser is to decide how best to execute the query against the database. It evaluates various execution plans and selects the one that will take the least time and require the least use of system resources; indexes, join techniques and the distribution of data are among the aspects the optimiser takes into account. The query executor carries out the evaluation plan developed by the optimiser: it draws the required data from storage, filters and joins it into the required results and returns them to the user.
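In MySQL, the optimiser's chosen plan can be inspected with EXPLAIN. The sketch below (table and column names taken from the Section 2 schema) reports whether an index is used, the join order and the estimated number of rows examined, without actually running the query:

EXPLAIN
SELECT c.Name, o.OrderID, o.TotalAmount
FROM Customers AS c
JOIN Orders AS o ON o.CustomerID = c.CustomerID
WHERE c.CustomerID = 1;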

Transaction Management: Transaction management guarantees that all database transactions are completed correctly and are independent of other parallel transactions. It comprises subcomponents such as the transaction manager, the concurrency controller and the recovery manager. Part of the responsibility of the transaction manager is to supervise the execution of transactions so that they adhere to the ACID properties (Zhou et al., 2022). Concurrency control mechanisms regulate the parallel processing of operations by several users; they resolve conflicts and guarantee that transactions do not interfere with one another, keeping the database consistent. The recovery manager, on the other hand, is tasked with returning the database to a consistent state in the event of a failure.
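A minimal sketch of this behaviour in MySQL (InnoDB), using the Section 2 tables with invented identifiers: either both statements take effect, or neither does.

START TRANSACTION;

-- (Assumes product 101 and order 5001 already exist.)
UPDATE Products
SET StockLevel = StockLevel - 2
WHERE ProductID = 101;

INSERT INTO OrderItems (OrderID, ProductID, Quantity)
VALUES (5001, 101, 2);

COMMIT;      -- make both changes permanent
-- ROLLBACK; -- would instead undo both changes if something had gone wrong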

Section 2: Database Design

Analysis of Business Requirements and ERD Creation

Entities and Attributes

  • Products

ProductID

Name

Description

Price

StockLevel

  • Customers

CustomerID

Name

Email

Address

  • Orders

OrderID

OrderDate

CustomerID

TotalAmount

  • OrderItems

OrderItemID

OrderID

ProductID

Quantity

  • Suppliers

SupplierID

Name

ContactInfo

  • Inventory

InventoryID

ProductID

SupplierID

Quantity

Relationships

Customers to Orders: One-to-Many (A single customer can place multiple orders).

Orders to Order Items: One-to-Many (Each order can contain multiple items).

OrderItems to Products: Many-to-One (Multiple order items can refer to the same product).

Products to Suppliers: Many-to-Many (A product can be supplied by multiple suppliers, and a supplier can supply multiple products).

Products to Inventory: One-to-One (A product's stock can be tracked through inventory).

ERD

Figure 1: ERD

(Source: Self-created using draw)

The Products entity contains specific information about each product, including ProductID, which is unique for every product, as well as the name, description, price and stock level of the product. The Customers entity keeps records of customers with attributes such as CustomerID, Name, Email and Address.

The Orders entity keeps records of the orders placed by customers; it contains OrderID, OrderDate, CustomerID and TotalAmount. OrderItems is the entity that represents the items included in each order, and its attributes consist of OrderItemID, OrderID, ProductID and Quantity.

The Suppliers entity holds the information on the suppliers with the attributes SupplierID, Name and ContactInfo. Lastly, the Inventory entity has a set of attributes concerned with managing stock from different suppliers: InventoryID, ProductID, SupplierID and Quantity.

The key here is to understand the relationships between these entities. Customers place orders, and each order comprises several order items. Each OrderItem is associated with a specific Product. Products are purchased from suppliers, and the Inventory entity records the quantity of every product offered by the different suppliers.

Normalisation to 3rd Normal Form (3NF)

Normalisation is a strategy that places data into its simplest and least redundant form. The idea is that no unwanted anomalies should exist in any table; relational theory pursues this goal table by table, using precise definitions of each table.

1NF requires that each table have a primary key and that all fields in the table be simple, or indivisible. In this case, the schema is already in 1NF: every table possesses one or more attributes that act as its key field, and all the attributes of the tables are atomic.

The transition to Second Normal Form requires the removal of partial dependencies: every attribute that is not part of the primary key must depend on all components of the primary key. For instance, in the Orders table, TotalAmount depends on OrderID alone rather than on any other non-key attribute. Likewise, in the OrderItems table, all the attributes depend on the key attribute, OrderItemID. In the same manner, the Inventory table complies with 2NF since Quantity, ProductID and SupplierID are all dependent on InventoryID.

3NF specifies that non-key attributes must not depend on other non-key attributes. In the Products table, attributes such as Name, Description, Price and StockLevel depend directly on ProductID, which is the key, and do not depend on other non-key attributes. The same can be said of the OrderItems and Inventory tables.

Schema Design for TechGoods Database

Database Schema

Figure 2: Database Schema

(Source: Self-created using MySQL)

Design Justification

The schema design follows the norms of normalisation so that it achieves high efficiency and accurate data. Primary keys identify records, while foreign keys establish links between tables. Indexing is carried out by default on primary keys to improve query response time. This sound structure is suitable for managing products, customers, orders and inventory, and it can be extended to support TechGoods' future development and operational strategies.
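One possible MySQL rendering of the Figure 2 schema is sketched below; the data types and AUTO_INCREMENT surrogate keys are assumptions, and the original figure may use different choices.

CREATE TABLE Customers (
    CustomerID INT AUTO_INCREMENT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL,
    Email      VARCHAR(255) NOT NULL UNIQUE,
    Address    VARCHAR(255)
);

CREATE TABLE Products (
    ProductID   INT AUTO_INCREMENT PRIMARY KEY,
    Name        VARCHAR(100) NOT NULL,
    Description TEXT,
    Price       DECIMAL(10,2) NOT NULL,
    StockLevel  INT NOT NULL DEFAULT 0
);

CREATE TABLE Suppliers (
    SupplierID  INT AUTO_INCREMENT PRIMARY KEY,
    Name        VARCHAR(100) NOT NULL,
    ContactInfo VARCHAR(255)
);

CREATE TABLE Orders (
    OrderID     INT AUTO_INCREMENT PRIMARY KEY,
    OrderDate   DATE NOT NULL,
    CustomerID  INT NOT NULL,
    TotalAmount DECIMAL(10,2) NOT NULL,
    FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)
);

CREATE TABLE OrderItems (
    OrderItemID INT AUTO_INCREMENT PRIMARY KEY,
    OrderID     INT NOT NULL,
    ProductID   INT NOT NULL,
    Quantity    INT NOT NULL,
    FOREIGN KEY (OrderID)   REFERENCES Orders (OrderID),
    FOREIGN KEY (ProductID) REFERENCES Products (ProductID)
);

CREATE TABLE Inventory (
    InventoryID INT AUTO_INCREMENT PRIMARY KEY,
    ProductID   INT NOT NULL,
    SupplierID  INT NOT NULL,
    Quantity    INT NOT NULL DEFAULT 0,
    FOREIGN KEY (ProductID)  REFERENCES Products (ProductID),
    FOREIGN KEY (SupplierID) REFERENCES Suppliers (SupplierID)
);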

Section 3: Database Implementation and Ethical Considerations

DBMS Selection and Justification

To support the implementation of the TechGoods database, MySQL will be adopted as the Database Management System (DBMS). The following critical factors justify this choice: data security, scalability and compatibility with the company's existing IT assets. MySQL is well recognised as a platform with strong security, which is indispensable for protecting customer and transaction data. It provides reliable means of user authentication, a rich set of access permissions, and backup and encryption of user data both in storage and in motion. These security capabilities are important for safeguarding personal and financial data against intruders and misuse.

Another factor that makes MySQL fit for TechGoods is scalability. As transaction and data volumes grow in the actual business environment, scaling becomes increasingly important. MySQL offers features such as replication and clustering, which enable horizontal scaling of the database when demand increases and performance would otherwise suffer.

MySQL is also chosen with ethical and legal considerations in mind. Legal compliance with regulations such as GDPR and CCPA is important (Vatjalainen, 2023), and MySQL supports these standards so that TechGoods handles customers' records transparently and in ways that reflect the law. This entails obtaining users' prior consent for data collection and processing, as well as giving users the means to access and manage the data collected about them.
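As an illustration of MySQL's privilege system (the account names, host patterns and the techgoods schema name are hypothetical, and the passwords are placeholders), access can be narrowed so that a reporting account only reads data while the shop application writes only to the tables it needs:

CREATE USER 'report_ro'@'%' IDENTIFIED BY 'replace-with-strong-password';
GRANT SELECT ON techgoods.* TO 'report_ro'@'%';

CREATE USER 'shop_app'@'10.0.%' IDENTIFIED BY 'replace-with-strong-password';
GRANT SELECT, INSERT, UPDATE ON techgoods.Orders     TO 'shop_app'@'10.0.%';
GRANT SELECT, INSERT, UPDATE ON techgoods.OrderItems TO 'shop_app'@'10.0.%';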

Ethical Considerations

Another ethical issue in database management is the accuracy of the information contained in the database. Data input and maintenance require accuracy, which is crucial in every respect for the database. Constraints such as unique keys and foreign keys allow data to be checked so that only correct data is entered into the database (Atzeni et al., 2020). This reduces the chance of data errors, where data in one place is inconsistent with or duplicated in another, leading to operational problems or incorrect reports.

Privacy and consent are equally critical to maintaining customers' trust and meeting the applicable legal standards on confidentiality. Given the nature of the user data collected, used and stored at TechGoods, the company should ensure that users are informed of this. The database design therefore includes ways to record data-management processes and capture user consent, which enhances the business's credibility with its clientele as well as helping to fulfil the legal requirements.
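One way to support this in the database itself, offered here purely as a hypothetical extension rather than part of the Figure 2 schema, is a small consent log keyed to customers:

CREATE TABLE ConsentLog (
    ConsentID   INT AUTO_INCREMENT PRIMARY KEY,
    CustomerID  INT NOT NULL,
    Purpose     VARCHAR(100) NOT NULL,  -- e.g. 'marketing emails'
    GivenAt     DATETIME NOT NULL,
    WithdrawnAt DATETIME NULL,          -- set when consent is withdrawn
    FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)
);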

Stored Procedures and Their Implications

Two stored procedures have been implemented to streamline interaction with the TechGoods database.

The first procedure, ReplenishStock, replenishes products once the quantity in the warehouse falls below a particular limit. The procedure searches the Products table for stock that is almost depleted and updates the Inventory table accordingly. Automating this task helps TechGoods reduce the manual effort required of employees and minimise out-of-stock incidents, because inventory levels are checked continuously. A possible sketch of this procedure is given below.
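The assignment names the procedure but not its exact body, so the following is only a sketch under those assumptions; the threshold and reorder quantity are treated as parameters.

DELIMITER //
CREATE PROCEDURE ReplenishStock(IN p_threshold INT, IN p_reorder_qty INT)
BEGIN
    -- Top up inventory rows for any product whose stock level has fallen
    -- below the threshold.
    UPDATE Inventory i
    JOIN   Products  p ON p.ProductID = i.ProductID
    SET    i.Quantity = i.Quantity + p_reorder_qty
    WHERE  p.StockLevel < p_threshold;
END //
DELIMITER ;

-- Example call: reorder 50 units of anything with fewer than 10 in stock.
CALL ReplenishStock(10, 50);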

The second procedure, CustomerOrderSummary, returns detailed information about the orders of a particular customer. It sums the customer's spending on each order, with the results aggregated by order ID (Hajek and Abedin, 2020). This procedure benefits both the customer and the company by giving the customer a clear view of their order history and spending, potentially increasing satisfaction. A possible sketch is given below.
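Again, the exact body is not given in the assignment, so this sketch simply aggregates each order's total from its items and product prices; the parameter name is assumed.

DELIMITER //
CREATE PROCEDURE CustomerOrderSummary(IN p_customer_id INT)
BEGIN
    SELECT  o.OrderID,
            o.OrderDate,
            SUM(oi.Quantity * p.Price) AS OrderTotal
    FROM    Orders     o
    JOIN    OrderItems oi ON oi.OrderID  = o.OrderID
    JOIN    Products   p  ON p.ProductID = oi.ProductID
    WHERE   o.CustomerID = p_customer_id
    GROUP BY o.OrderID, o.OrderDate;
END //
DELIMITER ;

-- Example call for customer 1:
CALL CustomerOrderSummary(1);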

From a social and sustainability point of view, these stored procedures benefit the operation of the business, since they promote the efficient utilisation of resources.

Reference List

Journals

  • Abbas, A. and Ahmad, K., 2020. Query Performance in Database Operation. PS-FTSM-2020-045.

  • Al-Aqbi, A.T.Q., Al-Taie, R.R.K. and Ibrahim, S.K., 2021. Design and Implementation of Online Examination System Based on MSVS and SQL for University Students in Iraq. Webology, 18(1).

  • Atzeni, P., Bugiotti, F., Cabibbo, L. and Torlone, R., 2020. Data modelling in the NoSQL world. Computer Standards & Interfaces, 67, p.103149.

  • Batra, P., Goel, N., Sangwan, S. and Dixit, H., 2020. Design and implementation of a hostel management system using Java and MySQL. LC International Journal of STEM (ISSN: 2708-7123), 1(4), pp. 63-74.

  • Hajek, P. and Abedin, M.Z., 2020. A profit function-maximising inventory backorder prediction system using big data analytics. IEEE Access, 8, pp. 58982-58994.

  • Hamidi, A., Hamraz, A.R. and Rahmani, K., 2022. Database Security Mechanisms in MySQL. Afghanistan Research Journal, 4(1).

  • Jaleel, R.A. and Abbas, T.M., 2020, June. Design and implementation of an efficient decision support system using data mart architecture. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1-6). IEEE.

  • Karwin, B., 2022. SQL Antipatterns, Volume 1. Pragmatic Bookshelf.

  • Maesaroh, S., Gunawan, H., Lestari, A., Tsaurie, M.S.A. and Fauji, M., 2022. Query optimisation in the MySQL database using an index. International Journal of Cyber and IT Service Management, 2(2), pp.104-110.

  • Melina, P.E., Witanti, W. and Sukrido, K.V., 2020. Design and implementation of a multi-knowledge expert system using the SQL inference mechanism for herbal medicine. In Journal of Physics: Conference Series (Vol. 1477, No. 2, pp. 1-9).

  • Samidi, S., Suladi, R.Y. and Lesmana, A.B., 2022. Implementation of Database Distributed Sharding Horizontal Partition in MySQL. Case Study of Application of Food Serving at Keates. JURNAL SISFOTEK GLOBAL, 12(1), pp. 50-57.

  • Srinivasan, V., Gooding, A., Sayyaparaju, S., Lopatic, T., Porter, K., Shinde, A. and Narendran, B., 2023. Techniques and Efficiencies from Building a Real-Time DBMS. Proceedings of the VLDB Endowment, 16(12), pp.3676-3688.

  • Sudiartha, I.K.G., Indrayana, I.N.E., Suasnawa, I.W., Asri, S.A. and Sunu, P.W., 2020, July. Data structure comparison between MySQL and Firebase NoSQL databases on a mobile-based tracking application. In Journal of Physics: Conference Series (Vol. 1569, No. 3, p. 032092). IOP Publishing.

  • Suster, I. and Ranisavljevic, T., 2023. Optimisation of SQL database. Journal of Process Management, New Technologies 1-2, pp. 141-151.

  • Taipalus, T., 2020. The effects of database complexity on SQL query formulation. Journal of Systems and Software, 165, p.110576.

  • Vatjalainen, A., 2023. SQL versus NoSQL: a comparison case of MySQL versus MongoDB.

  • Zhang, Y. and Pan, F., 2022. Design and implementation of a new intelligent warehouse management system based on MySQL database technology. Informatica, 46(3).

  • Zhou, X., Yu, X., Graefe, G. and Stonebraker, M., 2022. Lotus: scalable multi-partition transactions on single-threaded partitioned databases. Proceedings of the VLDB Endowment, 15(11), pp. 2939-2952.
