Source Code Cross Referenced for LoggingBufferManager.java (Mckoi SQL Database, com.mckoi.store)



/**
 * com.mckoi.store.LoggingBufferManager  10 Jun 2003
 *
 * Mckoi SQL Database ( http://www.mckoi.com/database )
 * Copyright (C) 2000, 2001, 2002  Diehl and Associates, Inc.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * Version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License Version 2 for more details.
 *
 * You should have received a copy of the GNU General Public License
 * Version 2 along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
 *
 * Change Log:
 *
 *
 */
package com.mckoi.store;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.Arrays;
import java.io.IOException;
import java.io.File;
import com.mckoi.debug.DebugLogger;
import com.mckoi.debug.Lvl;

/**
 * A paged random access buffer manager that caches access between a Store and
 * the underlying filesystem and that also handles check point logging and
 * crash recovery (via a JournalledSystem object).
 *
 * @author Tobias Downer
 */

public class LoggingBufferManager {

    /**
     * Set to true for extra assertions.
     */
    private static boolean PARANOID_CHECKS = false;

    /**
     * A timer that represents the T value in buffer pages.
     */
    private long current_T;

    /**
     * The number of pages in this buffer.
     */
    private int current_page_count;

    /**
     * The list of all pages.
     */
    private ArrayList page_list;

    /**
     * A lock used when accessing the current_T, page_list and current_page_count
     * members.
     */
    private final Object T_lock = new Object();

    /**
     * A hash map of all pages currently in memory keyed by store_id and page
     * number.
     * NOTE: This MUST be final for the 'fetchPage' method to be safe.
     */
    private final BMPage[] page_map;

    /**
     * A unique id key counter for all stores using this buffer manager.
     */
    private int unique_id_seq;

    /**
     * The JournalledSystem object that handles journalling of all data.
     */
    private JournalledSystem journalled_system;

    /**
     * The maximum number of pages that should be kept in memory before pages
     * are written out to disk.
     */
    private final int max_pages;

    /**
     * The size of each page.
     */
    private final int page_size;

    // ---------- Write locks ----------

    /**
     * Set to true when a 'setCheckPoint' is in progress.
     */
    private boolean check_point_in_progress;

    /**
     * The number of write locks currently on the buffer.  Any number of write
     * locks can be obtained; however, a 'setCheckPoint' can only proceed when
     * there are no write operations in progress.
     */
    private int write_lock_count;

    /**
     * A mutex for when modifying the write lock information.
     */
    private final Object write_lock = new Object();

    //  /**
    //   * The number of cache hits.
    //   */
    //  private long cache_hit_count;
    //
    //  /**
    //   * The number of cache misses.
    //   */
    //  private long cache_miss_count;

    /**
     * Constructs the manager.
     */
    public LoggingBufferManager(File journal_path, boolean read_only,
            int max_pages, int page_size,
            StoreDataAccessorFactory sda_factory, DebugLogger debug,
            boolean enable_logging) {
        this.max_pages = max_pages;
        this.page_size = page_size;

        check_point_in_progress = false;
        write_lock_count = 0;

        current_T = 0;
        page_list = new ArrayList();
        page_map = new BMPage[257];
        unique_id_seq = 0;

        journalled_system = new JournalledSystem(journal_path, read_only,
                page_size, sda_factory, debug, enable_logging);
    }

    /**
     * Constructs the manager with a scattering store implementation that
     * converts the resource to a file in the given path.
     */
    public LoggingBufferManager(final File resource_path,
            final File journal_path, final boolean read_only,
            final int max_pages, final int page_size,
            final String file_ext, final long max_slice_size,
            DebugLogger debug, boolean enable_logging) {
        this(journal_path, read_only, max_pages, page_size,
                new StoreDataAccessorFactory() {
                    public StoreDataAccessor createStoreDataAccessor(
                            String resource_name) {
                        return new ScatteringStoreDataAccessor(
                                resource_path, resource_name, file_ext,
                                max_slice_size);
                    }
                }, debug, enable_logging);
    }

    /**
     * Starts the buffer manager.
     */
    public void start() throws IOException {
        journalled_system.start();
    }

    /**
     * Stops the buffer manager.
     */
    public void stop() throws IOException {
        journalled_system.stop();
    }

    // ----------

    /**
     * Creates a new resource.
     */
    JournalledResource createResource(String resource_name) {
        return journalled_system.createResource(resource_name);
    }

    /**
     * Obtains a write lock on the buffer.  This will block if a 'setCheckPoint'
     * is in progress, otherwise it will always succeed.
     */
    public void lockForWrite() throws InterruptedException {
        synchronized (write_lock) {
            while (check_point_in_progress) {
                write_lock.wait();
            }
            ++write_lock_count;
        }
    }

    /**
     * Releases a write lock on the buffer.  This MUST be called if the
     * 'lockForWrite' method is called.  This should be called from a 'finally'
     * clause.
     */
    public void unlockForWrite() {
        synchronized (write_lock) {
            --write_lock_count;
            write_lock.notifyAll();
        }
    }
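
    // Usage note (not part of the original source): the javadoc above requires
    // every 'lockForWrite' to be paired with an 'unlockForWrite' in a 'finally'
    // clause.  A minimal sketch, with placeholder names, of how callers are
    // expected to bracket a buffered write ('lockForWrite' can also throw
    // InterruptedException, which the caller must handle or declare):
    //
    //   buffer_manager.lockForWrite();
    //   try {
    //       buffer_manager.writeByteTo(resource, position, b);
    //   } finally {
    //       buffer_manager.unlockForWrite();
    //   }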

    /**
     * Sets a check point in the log.  This logs a point that a recovery
     * process should at least be able to rebuild the store back to.  This
     * will block if there are any write locks.
     * <p>
     * Some things to keep in mind when using this.  You must ensure that no
     * writes can occur while this operation is in progress.  Typically this
     * will happen at the end of a commit, but you need to ensure that nothing
     * can happen in the background, such as records being deleted or items
     * being inserted.  It is required that the 'no write' restriction is
     * enforced at a high level.  If care is not taken then the image written
     * will not be clean and, if a crash occurs, the image that is recovered
     * will not be stable.
     */
    public void setCheckPoint(boolean flush_journals)
            throws IOException, InterruptedException {

        // Wait until the writes have finished, and then set the
        // 'check_point_in_progress' boolean.
        synchronized (write_lock) {
            while (write_lock_count > 0) {
                write_lock.wait();
            }
            check_point_in_progress = true;
        }

        try {
            //      System.out.println("SET CHECKPOINT");
            synchronized (page_map) {
                // Flush all the pages out to the log.
                for (int i = 0; i < page_map.length; ++i) {
                    BMPage page = page_map[i];
                    BMPage prev = null;

                    while (page != null) {
                        boolean deleted_hash = false;
                        synchronized (page) {
                            // Flush the page (will only actually flush if there are changes)
                            page.flush();

                            // Remove this page if it is no longer in use
                            if (page.notInUse()) {
                                deleted_hash = true;
                                if (prev == null) {
                                    page_map[i] = page.hash_next;
                                } else {
                                    prev.hash_next = page.hash_next;
                                }
                            }

                        }
                        // Go to next page in hash chain
                        if (!deleted_hash) {
                            prev = page;
                        }
                        page = page.hash_next;
                    }
                }
            }

            journalled_system.setCheckPoint(flush_journals);

        } finally {
            // Make sure we unset the 'check_point_in_progress' boolean and notify
            // any blockers.
            synchronized (write_lock) {
                check_point_in_progress = false;
                write_lock.notifyAll();
            }
        }

    }

    /**
     * Called when a new page is created.
     */
    private void pageCreated(final BMPage page) throws IOException {
        synchronized (T_lock) {

            if (PARANOID_CHECKS) {
                int i = page_list.indexOf(page);
                if (i != -1) {
                    BMPage f = (BMPage) page_list.get(i);
                    if (f == page) {
                        throw new Error("Same page added multiple times.");
                    }
                    if (f != null) {
                        throw new Error("Duplicate pages.");
                    }
                }
            }

            page.t = current_T;
            ++current_T;

            ++current_page_count;
            page_list.add(page);

            // Below is the page purge algorithm.  If the maximum number of pages
            // has been created we sort the page list, weighting each page by time
            // since last accessed and total number of accesses, and clear the
            // bottom 20% of this list.

            // Check if we should purge old pages and purge some if we do...
            if (current_page_count > max_pages) {
                // Purge 20% of the cache
                // Sort the pages by the current formula,
                //  ( 1 / page_access_count ) * (current_t - page_t)
                // Further, if the page has written data then we multiply by 0.75.
                // This scales down page writes so they have a better chance of
                // surviving in the cache than page reads.
                Object[] pages = page_list.toArray();
                Arrays.sort(pages, PAGE_CACHE_COMPARATOR);

                int purge_size = Math.max((int) (pages.length * 0.20f), 2);
                for (int i = 0; i < purge_size; ++i) {
                    BMPage dpage = (BMPage) pages[pages.length - (i + 1)];
                    synchronized (dpage) {
                        dpage.dispose();
                    }
                }

                // Remove all the elements from page_list and set it with the sorted
                // list (minus the elements we removed).
                page_list.clear();
                for (int i = 0; i < pages.length - purge_size; ++i) {
                    page_list.add(pages[i]);
                }

                current_page_count -= purge_size;

            }
        }
    }

    /**
     * Called when a page is accessed.
     */
    private void pageAccessed(BMPage page) {
        synchronized (T_lock) {
            page.t = current_T;
            ++current_T;
            ++page.access_count;
        }
    }

    /**
     * Calculates a hash code given an id value and a page_number value.
     */
    private static int calcHashCode(long id, long page_number) {
        return (int) ((id << 6) + (page_number * ((id + 25) << 2)));
    }

    /**
     * Fetches and returns a page from a store.  Pages may be cached.  If the
     * page is not available in the cache then a new BMPage object is created
     * for the page requested.
     */
    private BMPage fetchPage(JournalledResource data,
            final long page_number) throws IOException {
        final long id = data.getID();

        BMPage prev_page = null;
        boolean new_page = false;
        BMPage page;

        synchronized (page_map) {
            // Generate the hash code for this page.
            final int p = (calcHashCode(id, page_number) & 0x07FFFFFFF)
                    % page_map.length;
            // Search for this page in the hash
            page = page_map[p];
            while (page != null && !page.isPage(id, page_number)) {
                prev_page = page;
                page = page.hash_next;
            }

            // Page isn't found so create it and add to the cache
            if (page == null) {
                page = new BMPage(data, page_number, page_size);
                // Add this page to the map
                page.hash_next = page_map[p];
                page_map[p] = page;
            } else {
                // Move this page to the head if it's not already at the head.
                if (prev_page != null) {
                    prev_page.hash_next = page.hash_next;
                    page.hash_next = page_map[p];
                    page_map[p] = page;
                }
            }

            synchronized (page) {
                // If page not in use then it must be newly setup, so add a
                // reference.
                if (page.notInUse()) {
                    page.reset();
                    new_page = true;
                    page.referenceAdd();
                }
                // Add a reference for this fetch
                page.referenceAdd();
            }

        }

        // If the page is new,
        if (new_page) {
            pageCreated(page);
        } else {
            pageAccessed(page);
        }

        // Return the page.
        return page;

    }

    // ------
    // Buffered access methods.  These are all thread safe methods.  When a page
    // is accessed the page is synchronized so no 2 or more operations can
    // read/write from the page at the same time.  An operation can read/write to
    // different pages at the same time, however, and this requires thread safety
    // at a lower level (in the JournalledResource implementation).
    // ------

    int readByteFrom(JournalledResource data, long position)
            throws IOException {
        final long page_number = position / page_size;
        int v;

        BMPage page = fetchPage(data, page_number);
        synchronized (page) {
            try {
                page.initialize();
                v = ((int) page.read((int) (position % page_size))) & 0x0FF;
            } finally {
                page.dispose();
            }
        }

        return v;
    }

    int readByteArrayFrom(JournalledResource data, long position,
            byte[] buf, int off, int len) throws IOException {

        final int orig_len = len;
        long page_number = position / page_size;
        int start_offset = (int) (position % page_size);
        int to_read = Math.min(len, page_size - start_offset);

        BMPage page = fetchPage(data, page_number);
        synchronized (page) {
            try {
                page.initialize();
                page.read(start_offset, buf, off, to_read);
            } finally {
                page.dispose();
            }
        }

        len -= to_read;
        while (len > 0) {
            off += to_read;
            position += to_read;
            ++page_number;
            to_read = Math.min(len, page_size);

            page = fetchPage(data, page_number);
            synchronized (page) {
                try {
                    page.initialize();
                    page.read(0, buf, off, to_read);
                } finally {
                    page.dispose();
                }
            }
            len -= to_read;
        }

        return orig_len;
    }

    void writeByteTo(JournalledResource data, long position, int b)
            throws IOException {

        if (PARANOID_CHECKS) {
            synchronized (write_lock) {
                if (write_lock_count == 0) {
                    System.out.println("Write without a lock!");
                    new Error().printStackTrace();
                }
            }
        }

        final long page_number = position / page_size;

        BMPage page = fetchPage(data, page_number);
        synchronized (page) {
            try {
                page.initialize();
                page.write((int) (position % page_size), (byte) b);
            } finally {
                page.dispose();
            }
        }
    }

    void writeByteArrayTo(JournalledResource data, long position,
            byte[] buf, int off, int len) throws IOException {

        if (PARANOID_CHECKS) {
            synchronized (write_lock) {
                if (write_lock_count == 0) {
                    System.out.println("Write without a lock!");
                    new Error().printStackTrace();
                }
            }
        }

        long page_number = position / page_size;
        int start_offset = (int) (position % page_size);
        int to_write = Math.min(len, page_size - start_offset);

        BMPage page = fetchPage(data, page_number);
        synchronized (page) {
            try {
                page.initialize();
                page.write(start_offset, buf, off, to_write);
            } finally {
                page.dispose();
            }
        }
        len -= to_write;

        while (len > 0) {
            off += to_write;
            position += to_write;
            ++page_number;
            to_write = Math.min(len, page_size);

            page = fetchPage(data, page_number);
            synchronized (page) {
                try {
                    page.initialize();
                    page.write(0, buf, off, to_write);
                } finally {
                    page.dispose();
                }
            }
            len -= to_write;
        }

    }

    void setDataAreaSize(JournalledResource data, long new_size)
            throws IOException {
        data.setSize(new_size);
    }

    long getDataAreaSize(JournalledResource data) throws IOException {
        return data.getSize();
    }

    void close(JournalledResource data) throws IOException {
        long id = data.getID();
        // Flush all changes made to the resource then close.
        synchronized (page_map) {
            //      System.out.println("Looking for id: " + id);
            // Flush all the pages out to the log.
            // This scans the entire hash for values and could be an expensive
            // operation.  Fortunately 'close' isn't used all that often.
            for (int i = 0; i < page_map.length; ++i) {
                BMPage page = page_map[i];
                BMPage prev = null;

                while (page != null) {
                    boolean deleted_hash = false;
                    if (page.getID() == id) {
                        //            System.out.println("Found page id: " + page.getID());
                        synchronized (page) {
                            // Flush the page (will only actually flush if there are changes)
                            page.flush();

                            // Remove this page if it is no longer in use
                            if (page.notInUse()) {
                                deleted_hash = true;
                                if (prev == null) {
                                    page_map[i] = page.hash_next;
                                } else {
                                    prev.hash_next = page.hash_next;
                                }
                            }
                        }

                    }

                    // Go to next page in hash chain
                    if (!deleted_hash) {
                        prev = page;
                    }
                    page = page.hash_next;

                }
            }
        }

        data.close();
    }

    // ---------- Inner classes ----------

    /**
     * A page from a store that is currently being cached in memory.  This is
     * also an element in the cache.
     */
    private static final class BMPage {

        /**
         * The StoreDataAccessor that the page content is part of.
         */
        private final JournalledResource data;

        /**
         * The page number.
         */
        private final long page;

        /**
         * The size of the page.
         */
        private final int page_size;

        /**
         * The buffer that contains the data for this page.
         */
        private byte[] buffer;

        /**
         * True if this page is initialized.
         */
        private boolean initialized;

        /**
         * A reference to the next page with this hash key.
         */
        BMPage hash_next;

        /**
         * The time this page was last accessed.  This value is reset each time
         * the page is requested.
         */
        long t;

        /**
         * The number of times this page has been accessed since it was created.
         */
        int access_count;

        /**
         * The first position in the buffer that was last written.
         */
        private int first_write_position;

        /**
         * The last position in the buffer that was last written.
         */
        private int last_write_position;

        /**
         * The number of references on this page.
         */
        private int reference_count;

        /**
         * Constructs the page.
         */
        BMPage(JournalledResource data, long page, int page_size) {
            this.data = data;
            this.page = page;
            this.reference_count = 0;
            this.page_size = page_size;
            reset();
        }

        /**
         * Resets this object.
         */
        void reset() {
            // Assert that this is 0
            if (reference_count != 0) {
                throw new Error("reset when 'reference_count' is != 0 ( = "
                        + reference_count + " )");
            }
            this.initialized = false;
            this.t = 0;
            this.access_count = 0;
        }

        /**
         * Returns the id of the JournalledResource that is being buffered.
         */
        long getID() {
            return data.getID();
        }

        /**
         * Adds 1 to the reference counter on this page.
         */
        void referenceAdd() {
            ++reference_count;
        }

        /**
         * Removes 1 from the reference counter on this page.
         */
        private void referenceRemove() {
            if (reference_count <= 0) {
                throw new Error("Too many reference remove.");
            }
            --reference_count;
        }

        /**
         * Returns true if this page buffer is not in use (has a 0 reference
         * count and is not initialized).
         */
        boolean notInUse() {
            return reference_count == 0;
            //      return (reference_count <= 0 && !initialized);
        }

        /**
         * Returns true if this page matches the given id/page_number.
         */
        boolean isPage(long in_id, long in_page) {
            return (getID() == in_id && page == in_page);
        }

        /**
         * Reads the current page content into memory.  This may read from the
         * data files or from a log.
         */
        private void readPageContent(long page_number, byte[] buf,
                int pos) throws IOException {
            if (pos != 0) {
                throw new Error("Assert failed: pos != 0");
            }
            // Read from the resource
            data.read(page_number, buf, pos);
        }

        /**
         * Flushes this page out to disk, but does not remove from memory.  In a
         * logging system this will flush the changes out to a log.
         */
        void flush() throws IOException {
            if (initialized) {
                if (last_write_position > -1) {
                    // Write to the store data.
                    data.write(page, buffer, first_write_position,
                            last_write_position - first_write_position);
                    //          System.out.println("FLUSH " + data + " off = " + first_write_position +
                    //                             " len = " + (last_write_position - first_write_position));
                }
                first_write_position = Integer.MAX_VALUE;
                last_write_position = -1;
            }
        }

        /**
         * Initializes the page buffer.  If the buffer is already initialized then
         * we just return.  If it's not initialized we set up any internal
         * structures that are required to be set up for access to this page.
         */
        void initialize() throws IOException {
            if (!initialized) {

                try {

                    // Create the buffer to contain the page in memory
                    buffer = new byte[page_size];
                    // Read the page.  This will either read the page from the backing
                    // store or from a log.
                    readPageContent(page, buffer, 0);
                    initialized = true;

                    //          access_count = 0;
                    first_write_position = Integer.MAX_VALUE;
                    last_write_position = -1;

                } catch (IOException e) {
                    // This makes debugging a little clearer if 'readPageContent' fails.
                    // When 'readPageContent' fails, the dispose method fails also.
                    System.out.println("IO Error during page initialize: "
                            + e.getMessage());
                    e.printStackTrace();
                    throw e;
                }

            }
        }

        /**
         * Disposes of the page buffer if it can be disposed (there are no
         * references to the page and the page is initialized).  When disposed the
         * memory used by the page is reclaimed and the content is written out to
         * disk.
         */
        void dispose() throws IOException {
            referenceRemove();
            if (reference_count == 0) {
                if (initialized) {

                    // Flushes the page from memory.  This will write the page out to the
                    // log.
                    flush();

                    // Page is no longer initialized.
                    initialized = false;
                    // Clear the buffer from memory.
                    buffer = null;

                } else {
                    // This happens if initialization failed.  In this case we don't
                    // flush out the changes, but we do allow the page to be disposed
                    // in the normal way.
                    // Note that any exception generated by the initialization failure
                    // will propagate correctly.
                    buffer = null;
                    //          throw new RuntimeException(
                    //                "Assertion failed: tried to dispose an uninitialized page.");
                }
            }
        }

        /**
         * Reads a single byte from the cached page from memory.
         */
        byte read(int pos) {
            return buffer[pos];
        }

        /**
         * Reads a part of this page into the cached page from memory.
         */
        void read(int pos, byte[] buf, int off, int len) {
            System.arraycopy(buffer, pos, buf, off, len);
        }

        /**
         * Writes a single byte to the page in memory.
         */
        void write(int pos, byte v) {
            first_write_position = Math.min(pos, first_write_position);
            last_write_position = Math.max(pos + 1, last_write_position);

            buffer[pos] = v;
        }

        /**
         * Writes to the given part of the page in memory.
         */
        void write(int pos, byte[] buf, int off, int len) {
            first_write_position = Math.min(pos, first_write_position);
            last_write_position = Math.max(pos + len, last_write_position);

            System.arraycopy(buf, off, buffer, pos, len);
        }

        public boolean equals(Object ob) {
            BMPage dest_page = (BMPage) ob;
            return isPage(dest_page.getID(), dest_page.page);
        }

    }

    /**
     * A data resource that is being buffered.
     */
    private static class BResource {

        /**
         * The id assigned to the resource.
         */
        private final long id;

        /**
         * The unique name of the resource within the store.
         */
        private final String name;

        /**
         * Constructs the resource.
         */
        BResource(long id, String name) {
            this.id = id;
            this.name = name;
        }

        /**
         * Returns the id assigned to this resource.
         */
        long getID() {
            return id;
        }

        /**
         * Returns the name of this resource.
         */
        String getName() {
            return name;
        }

    }

    /**
     * A Comparator used to sort cache entries.
     */
    private final Comparator PAGE_CACHE_COMPARATOR = new Comparator() {

        /**
         * The calculation for finding the 'weight' of a page in the cache.  A
         * heavier page is sorted lower and is therefore cleared from the cache
         * faster.
         */
        private final float pageEnumValue(BMPage page) {
            // We fix the access counter so it can not exceed 10000 accesses.  I'm
            // a little unsure if we should put this constant in the equation but it
            // ensures that some old but highly accessed page will not stay in the
            // cache forever.
            final long bounded_page_count = Math.min(page.access_count, 10000);
            final float v = (1f / bounded_page_count) * (current_T - page.t);
            return v;
        }

        public int compare(Object ob1, Object ob2) {
            float v1 = pageEnumValue((BMPage) ob1);
            float v2 = pageEnumValue((BMPage) ob2);
            if (v1 > v2) {
                return 1;
            } else if (v1 < v2) {
                return -1;
            }
            return 0;
        }

    };
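
    // Worked example of the weight formula above (illustrative numbers only,
    // not from the original source): with current_T - page.t = 100 for two
    // pages, a page accessed once weighs (1f / 1) * 100 = 100 while a page
    // accessed 4 times weighs (1f / 4) * 100 = 25, so the less frequently
    // accessed page sorts heavier (later in the ascending sort) and is
    // disposed first by the purge in 'pageCreated' when the cache exceeds
    // 'max_pages'.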

    /**
     * A factory interface for creating StoreDataAccessor objects from resource
     * names.
     */
    public static interface StoreDataAccessorFactory {

        /**
         * Returns a StoreDataAccessor object for the given resource name.
         */
        public StoreDataAccessor createStoreDataAccessor(String resource_name);

    }

}
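
The class above exposes a small driving surface: construct it, start() it, bracket writes with lockForWrite()/unlockForWrite(), call setCheckPoint() at a quiet moment, and stop() it on shutdown. The sketch below is not part of the Mckoi source; it is a minimal assembly of those calls, assumed to live in its own file in the com.mckoi.store package (createResource and the buffered access methods are package-private), and the LoggingBufferManagerExample class name, DebugLogger argument, file paths, sizes, resource name, and ".koi" extension are hypothetical placeholders.

package com.mckoi.store;

import java.io.File;
import java.io.IOException;
import com.mckoi.debug.DebugLogger;

// Hypothetical demo, not part of the original source.  It sits in
// com.mckoi.store so it can reach the package-private resource methods.
class LoggingBufferManagerExample {

    static void run(DebugLogger logger) throws IOException, InterruptedException {
        // Scattering-store constructor: resources become files under ./data,
        // split into slices of at most 16MB (all values are placeholders).
        LoggingBufferManager manager = new LoggingBufferManager(
                new File("./data"), new File("./data"), false,
                256,                // max_pages kept in memory
                8192,               // page_size in bytes
                ".koi",             // file_ext for resource slices
                16 * 1024 * 1024,   // max_slice_size
                logger, true);      // enable journal logging
        manager.start();
        try {
            JournalledResource resource = manager.createResource("example_resource");
            // (Real callers may need additional setup on the resource via APIs
            // not shown in this listing.)
            byte[] record = { 1, 2, 3, 4 };

            // All writes are bracketed by the write lock, released in 'finally'.
            manager.lockForWrite();
            try {
                manager.setDataAreaSize(resource, record.length);
                manager.writeByteArrayTo(resource, 0, record, 0, record.length);
            } finally {
                manager.unlockForWrite();
            }

            // With no writers active, log a recovery point and flush the journals.
            manager.setCheckPoint(true);

            manager.close(resource);
        } finally {
            manager.stop();
        }
    }

}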